00:00:00.001 Started by upstream project "autotest-per-patch" build number 132092 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.017 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.018 The recommended git tool is: git 00:00:00.018 using credential 00000000-0000-0000-0000-000000000002 00:00:00.021 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.036 Fetching changes from the remote Git repository 00:00:00.038 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.063 Using shallow fetch with depth 1 00:00:00.063 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.063 > git --version # timeout=10 00:00:00.122 > git --version # 'git version 2.39.2' 00:00:00.122 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.354 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.370 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.387 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:02.387 > git config core.sparsecheckout # timeout=10 00:00:02.401 > git read-tree -mu HEAD # timeout=10 00:00:02.422 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:02.445 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:02.445 > git rev-list 
--no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:02.567 [Pipeline] Start of Pipeline 00:00:02.581 [Pipeline] library 00:00:02.583 Loading library shm_lib@master 00:00:02.583 Library shm_lib@master is cached. Copying from home. 00:00:02.600 [Pipeline] node 00:00:02.609 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.611 [Pipeline] { 00:00:02.622 [Pipeline] catchError 00:00:02.623 [Pipeline] { 00:00:02.637 [Pipeline] wrap 00:00:02.647 [Pipeline] { 00:00:02.655 [Pipeline] stage 00:00:02.657 [Pipeline] { (Prologue) 00:00:02.673 [Pipeline] echo 00:00:02.674 Node: VM-host-SM17 00:00:02.681 [Pipeline] cleanWs 00:00:02.689 [WS-CLEANUP] Deleting project workspace... 00:00:02.689 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.694 [WS-CLEANUP] done 00:00:02.879 [Pipeline] setCustomBuildProperty 00:00:02.970 [Pipeline] httpRequest 00:00:03.349 [Pipeline] echo 00:00:03.350 Sorcerer 10.211.164.101 is alive 00:00:03.357 [Pipeline] retry 00:00:03.359 [Pipeline] { 00:00:03.370 [Pipeline] httpRequest 00:00:03.373 HttpMethod: GET 00:00:03.373 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.374 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:03.378 Response Code: HTTP/1.1 200 OK 00:00:03.379 Success: Status code 200 is in the accepted range: 200,404 00:00:03.379 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:21.600 [Pipeline] } 00:00:21.618 [Pipeline] // retry 00:00:21.626 [Pipeline] sh 00:00:21.961 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:21.978 [Pipeline] httpRequest 00:00:22.406 [Pipeline] echo 00:00:22.408 Sorcerer 10.211.164.101 is alive 00:00:22.416 [Pipeline] retry 00:00:22.418 [Pipeline] { 00:00:22.433 [Pipeline] httpRequest 00:00:22.438 HttpMethod: GET 00:00:22.439 URL: 
http://10.211.164.101/packages/spdk_a59d7e018d7adc641bba077b8a8765ac4e880d7b.tar.gz 00:00:22.440 Sending request to url: http://10.211.164.101/packages/spdk_a59d7e018d7adc641bba077b8a8765ac4e880d7b.tar.gz 00:00:22.442 Response Code: HTTP/1.1 200 OK 00:00:22.443 Success: Status code 200 is in the accepted range: 200,404 00:00:22.443 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_a59d7e018d7adc641bba077b8a8765ac4e880d7b.tar.gz 00:05:41.547 [Pipeline] } 00:05:41.562 [Pipeline] // retry 00:05:41.568 [Pipeline] sh 00:05:41.842 + tar --no-same-owner -xf spdk_a59d7e018d7adc641bba077b8a8765ac4e880d7b.tar.gz 00:05:45.134 [Pipeline] sh 00:05:45.417 + git -C spdk log --oneline -n5 00:05:45.418 a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:05:45.418 f6925f5e4 accel/mlx5: Merge crypto+copy to reg UMR 00:05:45.418 008a6371b accel/mlx5: Initial implementation of mlx5 platform driver 00:05:45.418 cc533a3e5 nvme/nvme: Factor out submit_request function 00:05:45.418 117895738 accel/mlx5: Factor out task submissions 00:05:45.438 [Pipeline] writeFile 00:05:45.452 [Pipeline] sh 00:05:45.733 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:45.745 [Pipeline] sh 00:05:46.027 + cat autorun-spdk.conf 00:05:46.027 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:46.027 SPDK_RUN_ASAN=1 00:05:46.027 SPDK_RUN_UBSAN=1 00:05:46.027 SPDK_TEST_RAID=1 00:05:46.027 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:46.034 RUN_NIGHTLY=0 00:05:46.036 [Pipeline] } 00:05:46.049 [Pipeline] // stage 00:05:46.064 [Pipeline] stage 00:05:46.066 [Pipeline] { (Run VM) 00:05:46.081 [Pipeline] sh 00:05:46.365 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:46.365 + echo 'Start stage prepare_nvme.sh' 00:05:46.365 Start stage prepare_nvme.sh 00:05:46.365 + [[ -n 7 ]] 00:05:46.365 + disk_prefix=ex7 00:05:46.365 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:05:46.365 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:05:46.365 
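The autorun-spdk.conf dumped above is a plain shell fragment: prepare_nvme.sh and autorun.sh simply source it and read the SPDK_* flags as ordinary variables (hence the `++ SPDK_RUN_ASAN=1` xtrace lines that follow). A minimal sketch of that mechanism, using a temporary file with the same flag values as this run; the temp path is illustrative, not the job's real workspace:

```shell
# Sketch: how the job config is consumed. autorun-spdk.conf is nothing more
# than variable assignments, so sourcing it populates the caller's environment.
set -eu

conf=$(mktemp)   # stand-in for <workspace>/autorun-spdk.conf
cat > "$conf" <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_RUN_ASAN=1
SPDK_RUN_UBSAN=1
SPDK_TEST_RAID=1
RUN_NIGHTLY=0
EOF

. "$conf"

# Downstream scripts gate features on these flags, e.g.:
if [ "$SPDK_TEST_RAID" = 1 ]; then
  echo "RAID tests enabled"
fi
rm -f "$conf"
```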
+ source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:05:46.365 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:46.365 ++ SPDK_RUN_ASAN=1 00:05:46.365 ++ SPDK_RUN_UBSAN=1 00:05:46.365 ++ SPDK_TEST_RAID=1 00:05:46.365 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:46.365 ++ RUN_NIGHTLY=0 00:05:46.365 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:05:46.365 + nvme_files=() 00:05:46.365 + declare -A nvme_files 00:05:46.365 + backend_dir=/var/lib/libvirt/images/backends 00:05:46.365 + nvme_files['nvme.img']=5G 00:05:46.365 + nvme_files['nvme-cmb.img']=5G 00:05:46.365 + nvme_files['nvme-multi0.img']=4G 00:05:46.365 + nvme_files['nvme-multi1.img']=4G 00:05:46.365 + nvme_files['nvme-multi2.img']=4G 00:05:46.365 + nvme_files['nvme-openstack.img']=8G 00:05:46.365 + nvme_files['nvme-zns.img']=5G 00:05:46.365 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:46.365 + (( SPDK_TEST_FTL == 1 )) 00:05:46.365 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:46.365 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:46.365 + for nvme in "${!nvme_files[@]}" 00:05:46.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:05:46.365 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:46.365 + for nvme in "${!nvme_files[@]}" 00:05:46.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:05:46.365 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:46.365 + for nvme in "${!nvme_files[@]}" 00:05:46.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:05:46.365 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:46.365 + for nvme in "${!nvme_files[@]}" 00:05:46.365 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:05:46.365 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:46.365 + for nvme in "${!nvme_files[@]}" 00:05:46.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:05:46.365 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:46.365 + for nvme in "${!nvme_files[@]}" 00:05:46.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:05:46.365 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:46.365 + for nvme in "${!nvme_files[@]}" 00:05:46.365 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:05:47.299 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:47.299 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:05:47.299 + echo 'End stage prepare_nvme.sh' 00:05:47.299 End stage prepare_nvme.sh 00:05:47.310 [Pipeline] sh 00:05:47.591 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:47.591 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39 00:05:47.591 00:05:47.591 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:05:47.591 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:05:47.591 
VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:05:47.591 HELP=0 00:05:47.591 DRY_RUN=0 00:05:47.591 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:05:47.591 NVME_DISKS_TYPE=nvme,nvme, 00:05:47.591 NVME_AUTO_CREATE=0 00:05:47.591 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:05:47.591 NVME_CMB=,, 00:05:47.591 NVME_PMR=,, 00:05:47.591 NVME_ZNS=,, 00:05:47.591 NVME_MS=,, 00:05:47.591 NVME_FDP=,, 00:05:47.591 SPDK_VAGRANT_DISTRO=fedora39 00:05:47.591 SPDK_VAGRANT_VMCPU=10 00:05:47.591 SPDK_VAGRANT_VMRAM=12288 00:05:47.591 SPDK_VAGRANT_PROVIDER=libvirt 00:05:47.591 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:47.591 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:47.591 SPDK_OPENSTACK_NETWORK=0 00:05:47.591 VAGRANT_PACKAGE_BOX=0 00:05:47.591 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:05:47.591 FORCE_DISTRO=true 00:05:47.591 VAGRANT_BOX_VERSION= 00:05:47.591 EXTRA_VAGRANTFILES= 00:05:47.591 NIC_MODEL=e1000 00:05:47.591 00:05:47.591 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:05:47.592 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:05:50.888 Bringing machine 'default' up with 'libvirt' provider... 00:05:51.147 ==> default: Creating image (snapshot of base box volume). 00:05:51.147 ==> default: Creating domain with the following settings... 
00:05:51.147 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730883507_4c96b2d5d566e2a404a8 00:05:51.147 ==> default: -- Domain type: kvm 00:05:51.147 ==> default: -- Cpus: 10 00:05:51.147 ==> default: -- Feature: acpi 00:05:51.147 ==> default: -- Feature: apic 00:05:51.147 ==> default: -- Feature: pae 00:05:51.147 ==> default: -- Memory: 12288M 00:05:51.147 ==> default: -- Memory Backing: hugepages: 00:05:51.147 ==> default: -- Management MAC: 00:05:51.147 ==> default: -- Loader: 00:05:51.147 ==> default: -- Nvram: 00:05:51.147 ==> default: -- Base box: spdk/fedora39 00:05:51.147 ==> default: -- Storage pool: default 00:05:51.147 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730883507_4c96b2d5d566e2a404a8.img (20G) 00:05:51.147 ==> default: -- Volume Cache: default 00:05:51.147 ==> default: -- Kernel: 00:05:51.147 ==> default: -- Initrd: 00:05:51.147 ==> default: -- Graphics Type: vnc 00:05:51.147 ==> default: -- Graphics Port: -1 00:05:51.147 ==> default: -- Graphics IP: 127.0.0.1 00:05:51.147 ==> default: -- Graphics Password: Not defined 00:05:51.147 ==> default: -- Video Type: cirrus 00:05:51.147 ==> default: -- Video VRAM: 9216 00:05:51.147 ==> default: -- Sound Type: 00:05:51.147 ==> default: -- Keymap: en-us 00:05:51.147 ==> default: -- TPM Path: 00:05:51.147 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:51.147 ==> default: -- Command line args: 00:05:51.147 ==> default: -> value=-device, 00:05:51.147 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:51.147 ==> default: -> value=-drive, 00:05:51.147 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:05:51.147 ==> default: -> value=-device, 00:05:51.147 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:51.147 ==> default: -> value=-device, 00:05:51.147 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:51.147 ==> default: -> value=-drive, 00:05:51.147 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:51.147 ==> default: -> value=-device, 00:05:51.147 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:51.147 ==> default: -> value=-drive, 00:05:51.147 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:51.147 ==> default: -> value=-device, 00:05:51.147 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:51.147 ==> default: -> value=-drive, 00:05:51.147 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:51.147 ==> default: -> value=-device, 00:05:51.147 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:51.407 ==> default: Creating shared folders metadata... 00:05:51.407 ==> default: Starting domain. 00:05:53.309 ==> default: Waiting for domain to get an IP address... 00:06:08.191 ==> default: Waiting for SSH to become available... 00:06:09.128 ==> default: Configuring and enabling network interfaces... 00:06:13.312 default: SSH address: 192.168.121.40:22 00:06:13.312 default: SSH username: vagrant 00:06:13.312 default: SSH auth method: private key 00:06:15.841 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:24.064 ==> default: Mounting SSHFS shared folder... 00:06:24.687 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:06:24.687 ==> default: Checking Mount.. 
00:06:26.063 ==> default: Folder Successfully Mounted! 00:06:26.063 ==> default: Running provisioner: file... 00:06:26.630 default: ~/.gitconfig => .gitconfig 00:06:27.204 00:06:27.204 SUCCESS! 00:06:27.204 00:06:27.204 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:06:27.204 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:27.204 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:06:27.204 00:06:27.234 [Pipeline] } 00:06:27.266 [Pipeline] // stage 00:06:27.272 [Pipeline] dir 00:06:27.273 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:06:27.274 [Pipeline] { 00:06:27.283 [Pipeline] catchError 00:06:27.284 [Pipeline] { 00:06:27.293 [Pipeline] sh 00:06:27.570 + vagrant ssh-config --host vagrant 00:06:27.570 + sed -ne /^Host/,$p 00:06:27.570 + tee ssh_conf 00:06:30.930 Host vagrant 00:06:30.930 HostName 192.168.121.40 00:06:30.930 User vagrant 00:06:30.930 Port 22 00:06:30.930 UserKnownHostsFile /dev/null 00:06:30.930 StrictHostKeyChecking no 00:06:30.930 PasswordAuthentication no 00:06:30.930 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:06:30.930 IdentitiesOnly yes 00:06:30.930 LogLevel FATAL 00:06:30.930 ForwardAgent yes 00:06:30.930 ForwardX11 yes 00:06:30.930 00:06:30.944 [Pipeline] withEnv 00:06:30.947 [Pipeline] { 00:06:30.961 [Pipeline] sh 00:06:31.254 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:31.254 source /etc/os-release 00:06:31.254 [[ -e /image.version ]] && img=$(< /image.version) 00:06:31.254 # Minimal, systemd-like check. 
00:06:31.254 if [[ -e /.dockerenv ]]; then 00:06:31.254 # Clear garbage from the node's name: 00:06:31.254 # agt-er_autotest_547-896 -> autotest_547-896 00:06:31.254 # $HOSTNAME is the actual container id 00:06:31.254 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:31.254 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:31.254 # We can assume this is a mount from a host where container is running, 00:06:31.254 # so fetch its hostname to easily identify the target swarm worker. 00:06:31.254 container="$(< /etc/hostname) ($agent)" 00:06:31.254 else 00:06:31.254 # Fallback 00:06:31.254 container=$agent 00:06:31.254 fi 00:06:31.254 fi 00:06:31.254 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:31.254 00:06:31.266 [Pipeline] } 00:06:31.281 [Pipeline] // withEnv 00:06:31.287 [Pipeline] setCustomBuildProperty 00:06:31.299 [Pipeline] stage 00:06:31.301 [Pipeline] { (Tests) 00:06:31.317 [Pipeline] sh 00:06:31.598 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:31.903 [Pipeline] sh 00:06:32.184 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:32.197 [Pipeline] timeout 00:06:32.198 Timeout set to expire in 1 hr 30 min 00:06:32.199 [Pipeline] { 00:06:32.213 [Pipeline] sh 00:06:32.491 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:33.057 HEAD is now at a59d7e018 lib/mlx5: Add API to check if UMR registration supported 00:06:33.068 [Pipeline] sh 00:06:33.350 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:33.621 [Pipeline] sh 00:06:33.900 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:34.175 [Pipeline] sh 00:06:34.454 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 
JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:06:34.713 ++ readlink -f spdk_repo 00:06:34.713 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:34.713 + [[ -n /home/vagrant/spdk_repo ]] 00:06:34.713 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:34.713 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:34.713 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:06:34.713 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:06:34.713 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:34.713 + [[ raid-vg-autotest == pkgdep-* ]] 00:06:34.713 + cd /home/vagrant/spdk_repo 00:06:34.713 + source /etc/os-release 00:06:34.713 ++ NAME='Fedora Linux' 00:06:34.713 ++ VERSION='39 (Cloud Edition)' 00:06:34.713 ++ ID=fedora 00:06:34.713 ++ VERSION_ID=39 00:06:34.713 ++ VERSION_CODENAME= 00:06:34.713 ++ PLATFORM_ID=platform:f39 00:06:34.713 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:34.713 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:34.713 ++ LOGO=fedora-logo-icon 00:06:34.713 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:34.713 ++ HOME_URL=https://fedoraproject.org/ 00:06:34.713 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:34.714 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:34.714 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:34.714 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:34.714 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:34.714 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:34.714 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:34.714 ++ SUPPORT_END=2024-11-12 00:06:34.714 ++ VARIANT='Cloud Edition' 00:06:34.714 ++ VARIANT_ID=cloud 00:06:34.714 + uname -a 00:06:34.714 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:34.714 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:34.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:34.972 Hugepages 00:06:34.972 
node hugesize free / total 00:06:34.972 node0 1048576kB 0 / 0 00:06:35.230 node0 2048kB 0 / 0 00:06:35.230 00:06:35.230 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:35.230 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:35.230 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:35.230 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:35.230 + rm -f /tmp/spdk-ld-path 00:06:35.230 + source autorun-spdk.conf 00:06:35.230 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:35.230 ++ SPDK_RUN_ASAN=1 00:06:35.230 ++ SPDK_RUN_UBSAN=1 00:06:35.230 ++ SPDK_TEST_RAID=1 00:06:35.230 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:35.230 ++ RUN_NIGHTLY=0 00:06:35.230 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:35.230 + [[ -n '' ]] 00:06:35.230 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:35.230 + for M in /var/spdk/build-*-manifest.txt 00:06:35.230 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:35.230 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:35.230 + for M in /var/spdk/build-*-manifest.txt 00:06:35.230 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:35.230 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:35.230 + for M in /var/spdk/build-*-manifest.txt 00:06:35.230 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:35.230 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:35.231 ++ uname 00:06:35.231 + [[ Linux == \L\i\n\u\x ]] 00:06:35.231 + sudo dmesg -T 00:06:35.231 + sudo dmesg --clear 00:06:35.231 + dmesg_pid=5205 00:06:35.231 + [[ Fedora Linux == FreeBSD ]] 00:06:35.231 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.231 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:35.231 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:35.231 + sudo dmesg -Tw 00:06:35.231 + [[ -x /usr/src/fio-static/fio ]] 00:06:35.231 + export FIO_BIN=/usr/src/fio-static/fio 
00:06:35.231 + FIO_BIN=/usr/src/fio-static/fio 00:06:35.231 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:35.231 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:35.231 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:35.231 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.231 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:35.231 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:35.231 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.231 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:35.231 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:35.489 08:59:11 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:06:35.489 08:59:11 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:35.489 08:59:11 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:35.489 08:59:11 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:06:35.489 08:59:11 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:06:35.489 08:59:11 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:06:35.489 08:59:11 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:35.489 08:59:11 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:06:35.489 08:59:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:35.489 08:59:11 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:35.489 08:59:11 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:06:35.489 08:59:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.489 08:59:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:35.489 08:59:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:35.489 08:59:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.489 
08:59:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.489 08:59:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.489 08:59:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.489 08:59:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.489 08:59:11 -- paths/export.sh@5 -- $ export PATH 00:06:35.489 08:59:11 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.489 08:59:11 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:35.489 08:59:11 -- common/autobuild_common.sh@486 -- $ date +%s 00:06:35.489 08:59:11 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730883551.XXXXXX 00:06:35.489 08:59:11 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730883551.s1UnTu 00:06:35.489 08:59:11 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:06:35.489 08:59:11 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:06:35.489 08:59:11 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:35.489 08:59:11 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:35.489 08:59:11 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:35.489 08:59:11 -- common/autobuild_common.sh@502 -- $ get_config_params 00:06:35.489 08:59:11 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:06:35.489 08:59:11 -- common/autotest_common.sh@10 -- $ set +x 00:06:35.489 08:59:11 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
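The autobuild prologue above stamps a per-run scratch workspace by baking an epoch timestamp into a mktemp template (producing names like spdk_1730883551.s1UnTu), so concurrent builds never collide. A small sketch of that naming pattern; the resulting directory lands under $TMPDIR:

```shell
# Sketch of the per-run workspace naming seen above: epoch stamp + mktemp
# template. -d makes a directory, -t resolves the template under $TMPDIR.
set -eu

stamp=$(date +%s)
SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX")
echo "workspace: $SPDK_WORKSPACE"
```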
00:06:35.489 08:59:11 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:06:35.490 08:59:11 -- pm/common@17 -- $ local monitor 00:06:35.490 08:59:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.490 08:59:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:35.490 08:59:11 -- pm/common@25 -- $ sleep 1 00:06:35.490 08:59:11 -- pm/common@21 -- $ date +%s 00:06:35.490 08:59:11 -- pm/common@21 -- $ date +%s 00:06:35.490 08:59:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730883551 00:06:35.490 08:59:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730883551 00:06:35.490 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730883551_collect-cpu-load.pm.log 00:06:35.490 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730883551_collect-vmstat.pm.log 00:06:36.425 08:59:12 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:06:36.425 08:59:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:36.425 08:59:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:36.425 08:59:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:36.425 08:59:12 -- spdk/autobuild.sh@16 -- $ date -u 00:06:36.425 Wed Nov 6 08:59:12 AM UTC 2024 00:06:36.425 08:59:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:36.425 v25.01-pre-169-ga59d7e018 00:06:36.425 08:59:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:06:36.425 08:59:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:06:36.425 08:59:12 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:06:36.425 08:59:12 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:06:36.425 08:59:12 -- common/autotest_common.sh@10 -- $ set +x 
00:06:36.425 ************************************ 00:06:36.425 START TEST asan 00:06:36.425 ************************************ 00:06:36.425 using asan 00:06:36.426 08:59:12 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:06:36.426 00:06:36.426 real 0m0.000s 00:06:36.426 user 0m0.000s 00:06:36.426 sys 0m0.000s 00:06:36.426 08:59:12 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:36.426 08:59:12 asan -- common/autotest_common.sh@10 -- $ set +x 00:06:36.426 ************************************ 00:06:36.426 END TEST asan 00:06:36.426 ************************************ 00:06:36.426 08:59:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:36.426 08:59:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:36.426 08:59:12 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:06:36.426 08:59:12 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:06:36.426 08:59:12 -- common/autotest_common.sh@10 -- $ set +x 00:06:36.426 ************************************ 00:06:36.426 START TEST ubsan 00:06:36.426 ************************************ 00:06:36.426 using ubsan 00:06:36.426 08:59:12 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:06:36.426 00:06:36.426 real 0m0.000s 00:06:36.426 user 0m0.000s 00:06:36.426 sys 0m0.000s 00:06:36.426 08:59:12 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:36.426 08:59:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:36.426 ************************************ 00:06:36.426 END TEST ubsan 00:06:36.426 ************************************ 00:06:36.684 08:59:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:36.684 08:59:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:36.684 08:59:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:36.684 08:59:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:36.684 08:59:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:36.684 08:59:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 
]] 00:06:36.684 08:59:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:36.684 08:59:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:36.684 08:59:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:06:36.684 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:36.684 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:37.250 Using 'verbs' RDMA provider 00:06:50.429 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:07:05.298 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:07:05.298 Creating mk/config.mk...done. 00:07:05.298 Creating mk/cc.flags.mk...done. 00:07:05.298 Type 'make' to build. 00:07:05.298 08:59:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:07:05.298 08:59:40 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:07:05.298 08:59:40 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:07:05.298 08:59:40 -- common/autotest_common.sh@10 -- $ set +x 00:07:05.298 ************************************ 00:07:05.298 START TEST make 00:07:05.298 ************************************ 00:07:05.298 08:59:40 make -- common/autotest_common.sh@1127 -- $ make -j10 00:07:05.298 make[1]: Nothing to be done for 'all'. 
00:07:17.560 The Meson build system 00:07:17.560 Version: 1.5.0 00:07:17.560 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:17.560 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:17.560 Build type: native build 00:07:17.560 Program cat found: YES (/usr/bin/cat) 00:07:17.560 Project name: DPDK 00:07:17.560 Project version: 24.03.0 00:07:17.560 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:17.560 C linker for the host machine: cc ld.bfd 2.40-14 00:07:17.560 Host machine cpu family: x86_64 00:07:17.560 Host machine cpu: x86_64 00:07:17.560 Message: ## Building in Developer Mode ## 00:07:17.560 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:17.560 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:17.560 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:17.560 Program python3 found: YES (/usr/bin/python3) 00:07:17.560 Program cat found: YES (/usr/bin/cat) 00:07:17.560 Compiler for C supports arguments -march=native: YES 00:07:17.560 Checking for size of "void *" : 8 00:07:17.560 Checking for size of "void *" : 8 (cached) 00:07:17.560 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:17.560 Library m found: YES 00:07:17.560 Library numa found: YES 00:07:17.560 Has header "numaif.h" : YES 00:07:17.560 Library fdt found: NO 00:07:17.560 Library execinfo found: NO 00:07:17.560 Has header "execinfo.h" : YES 00:07:17.560 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:17.560 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:17.560 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:17.560 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:17.560 Run-time dependency openssl found: YES 3.1.1 00:07:17.560 Run-time dependency libpcap found: YES 1.10.4 00:07:17.560 Has header "pcap.h" with dependency 
libpcap: YES 00:07:17.560 Compiler for C supports arguments -Wcast-qual: YES 00:07:17.560 Compiler for C supports arguments -Wdeprecated: YES 00:07:17.560 Compiler for C supports arguments -Wformat: YES 00:07:17.560 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:17.560 Compiler for C supports arguments -Wformat-security: NO 00:07:17.560 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:17.560 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:17.560 Compiler for C supports arguments -Wnested-externs: YES 00:07:17.560 Compiler for C supports arguments -Wold-style-definition: YES 00:07:17.560 Compiler for C supports arguments -Wpointer-arith: YES 00:07:17.560 Compiler for C supports arguments -Wsign-compare: YES 00:07:17.560 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:17.560 Compiler for C supports arguments -Wundef: YES 00:07:17.560 Compiler for C supports arguments -Wwrite-strings: YES 00:07:17.560 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:17.560 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:17.560 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:17.560 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:17.560 Program objdump found: YES (/usr/bin/objdump) 00:07:17.560 Compiler for C supports arguments -mavx512f: YES 00:07:17.560 Checking if "AVX512 checking" compiles: YES 00:07:17.560 Fetching value of define "__SSE4_2__" : 1 00:07:17.560 Fetching value of define "__AES__" : 1 00:07:17.560 Fetching value of define "__AVX__" : 1 00:07:17.560 Fetching value of define "__AVX2__" : 1 00:07:17.560 Fetching value of define "__AVX512BW__" : (undefined) 00:07:17.560 Fetching value of define "__AVX512CD__" : (undefined) 00:07:17.560 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:17.560 Fetching value of define "__AVX512F__" : (undefined) 00:07:17.560 Fetching value of define "__AVX512VL__" : 
(undefined) 00:07:17.561 Fetching value of define "__PCLMUL__" : 1 00:07:17.561 Fetching value of define "__RDRND__" : 1 00:07:17.561 Fetching value of define "__RDSEED__" : 1 00:07:17.561 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:17.561 Fetching value of define "__znver1__" : (undefined) 00:07:17.561 Fetching value of define "__znver2__" : (undefined) 00:07:17.561 Fetching value of define "__znver3__" : (undefined) 00:07:17.561 Fetching value of define "__znver4__" : (undefined) 00:07:17.561 Library asan found: YES 00:07:17.561 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:17.561 Message: lib/log: Defining dependency "log" 00:07:17.561 Message: lib/kvargs: Defining dependency "kvargs" 00:07:17.561 Message: lib/telemetry: Defining dependency "telemetry" 00:07:17.561 Library rt found: YES 00:07:17.561 Checking for function "getentropy" : NO 00:07:17.561 Message: lib/eal: Defining dependency "eal" 00:07:17.561 Message: lib/ring: Defining dependency "ring" 00:07:17.561 Message: lib/rcu: Defining dependency "rcu" 00:07:17.561 Message: lib/mempool: Defining dependency "mempool" 00:07:17.561 Message: lib/mbuf: Defining dependency "mbuf" 00:07:17.561 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:17.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:17.561 Compiler for C supports arguments -mpclmul: YES 00:07:17.561 Compiler for C supports arguments -maes: YES 00:07:17.561 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:17.561 Compiler for C supports arguments -mavx512bw: YES 00:07:17.561 Compiler for C supports arguments -mavx512dq: YES 00:07:17.561 Compiler for C supports arguments -mavx512vl: YES 00:07:17.561 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:17.561 Compiler for C supports arguments -mavx2: YES 00:07:17.561 Compiler for C supports arguments -mavx: YES 00:07:17.561 Message: lib/net: Defining dependency "net" 00:07:17.561 Message: lib/meter: Defining 
dependency "meter" 00:07:17.561 Message: lib/ethdev: Defining dependency "ethdev" 00:07:17.561 Message: lib/pci: Defining dependency "pci" 00:07:17.561 Message: lib/cmdline: Defining dependency "cmdline" 00:07:17.561 Message: lib/hash: Defining dependency "hash" 00:07:17.561 Message: lib/timer: Defining dependency "timer" 00:07:17.561 Message: lib/compressdev: Defining dependency "compressdev" 00:07:17.561 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:17.561 Message: lib/dmadev: Defining dependency "dmadev" 00:07:17.561 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:17.561 Message: lib/power: Defining dependency "power" 00:07:17.561 Message: lib/reorder: Defining dependency "reorder" 00:07:17.561 Message: lib/security: Defining dependency "security" 00:07:17.561 Has header "linux/userfaultfd.h" : YES 00:07:17.561 Has header "linux/vduse.h" : YES 00:07:17.561 Message: lib/vhost: Defining dependency "vhost" 00:07:17.561 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:17.561 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:17.561 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:17.561 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:17.561 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:17.561 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:17.561 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:17.561 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:17.561 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:17.561 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:17.561 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:17.561 Configuring doxy-api-html.conf using configuration 00:07:17.561 Configuring doxy-api-man.conf using configuration 00:07:17.561 Program mandb found: YES 
(/usr/bin/mandb) 00:07:17.561 Program sphinx-build found: NO 00:07:17.561 Configuring rte_build_config.h using configuration 00:07:17.561 Message: 00:07:17.561 ================= 00:07:17.561 Applications Enabled 00:07:17.561 ================= 00:07:17.561 00:07:17.561 apps: 00:07:17.561 00:07:17.561 00:07:17.561 Message: 00:07:17.561 ================= 00:07:17.561 Libraries Enabled 00:07:17.561 ================= 00:07:17.561 00:07:17.561 libs: 00:07:17.561 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:17.561 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:17.561 cryptodev, dmadev, power, reorder, security, vhost, 00:07:17.561 00:07:17.561 Message: 00:07:17.561 =============== 00:07:17.561 Drivers Enabled 00:07:17.561 =============== 00:07:17.561 00:07:17.561 common: 00:07:17.561 00:07:17.561 bus: 00:07:17.561 pci, vdev, 00:07:17.561 mempool: 00:07:17.561 ring, 00:07:17.561 dma: 00:07:17.561 00:07:17.561 net: 00:07:17.561 00:07:17.561 crypto: 00:07:17.561 00:07:17.561 compress: 00:07:17.561 00:07:17.561 vdpa: 00:07:17.561 00:07:17.561 00:07:17.561 Message: 00:07:17.561 ================= 00:07:17.561 Content Skipped 00:07:17.561 ================= 00:07:17.561 00:07:17.561 apps: 00:07:17.561 dumpcap: explicitly disabled via build config 00:07:17.561 graph: explicitly disabled via build config 00:07:17.561 pdump: explicitly disabled via build config 00:07:17.561 proc-info: explicitly disabled via build config 00:07:17.561 test-acl: explicitly disabled via build config 00:07:17.561 test-bbdev: explicitly disabled via build config 00:07:17.561 test-cmdline: explicitly disabled via build config 00:07:17.561 test-compress-perf: explicitly disabled via build config 00:07:17.561 test-crypto-perf: explicitly disabled via build config 00:07:17.561 test-dma-perf: explicitly disabled via build config 00:07:17.561 test-eventdev: explicitly disabled via build config 00:07:17.561 test-fib: explicitly disabled via build config 00:07:17.561 
test-flow-perf: explicitly disabled via build config 00:07:17.561 test-gpudev: explicitly disabled via build config 00:07:17.561 test-mldev: explicitly disabled via build config 00:07:17.561 test-pipeline: explicitly disabled via build config 00:07:17.562 test-pmd: explicitly disabled via build config 00:07:17.562 test-regex: explicitly disabled via build config 00:07:17.562 test-sad: explicitly disabled via build config 00:07:17.562 test-security-perf: explicitly disabled via build config 00:07:17.562 00:07:17.562 libs: 00:07:17.562 argparse: explicitly disabled via build config 00:07:17.562 metrics: explicitly disabled via build config 00:07:17.562 acl: explicitly disabled via build config 00:07:17.562 bbdev: explicitly disabled via build config 00:07:17.562 bitratestats: explicitly disabled via build config 00:07:17.562 bpf: explicitly disabled via build config 00:07:17.562 cfgfile: explicitly disabled via build config 00:07:17.562 distributor: explicitly disabled via build config 00:07:17.562 efd: explicitly disabled via build config 00:07:17.562 eventdev: explicitly disabled via build config 00:07:17.562 dispatcher: explicitly disabled via build config 00:07:17.562 gpudev: explicitly disabled via build config 00:07:17.562 gro: explicitly disabled via build config 00:07:17.562 gso: explicitly disabled via build config 00:07:17.562 ip_frag: explicitly disabled via build config 00:07:17.562 jobstats: explicitly disabled via build config 00:07:17.562 latencystats: explicitly disabled via build config 00:07:17.562 lpm: explicitly disabled via build config 00:07:17.562 member: explicitly disabled via build config 00:07:17.562 pcapng: explicitly disabled via build config 00:07:17.562 rawdev: explicitly disabled via build config 00:07:17.562 regexdev: explicitly disabled via build config 00:07:17.562 mldev: explicitly disabled via build config 00:07:17.562 rib: explicitly disabled via build config 00:07:17.562 sched: explicitly disabled via build config 00:07:17.562 
stack: explicitly disabled via build config 00:07:17.562 ipsec: explicitly disabled via build config 00:07:17.562 pdcp: explicitly disabled via build config 00:07:17.562 fib: explicitly disabled via build config 00:07:17.562 port: explicitly disabled via build config 00:07:17.562 pdump: explicitly disabled via build config 00:07:17.562 table: explicitly disabled via build config 00:07:17.562 pipeline: explicitly disabled via build config 00:07:17.562 graph: explicitly disabled via build config 00:07:17.562 node: explicitly disabled via build config 00:07:17.562 00:07:17.562 drivers: 00:07:17.562 common/cpt: not in enabled drivers build config 00:07:17.562 common/dpaax: not in enabled drivers build config 00:07:17.562 common/iavf: not in enabled drivers build config 00:07:17.562 common/idpf: not in enabled drivers build config 00:07:17.562 common/ionic: not in enabled drivers build config 00:07:17.562 common/mvep: not in enabled drivers build config 00:07:17.562 common/octeontx: not in enabled drivers build config 00:07:17.562 bus/auxiliary: not in enabled drivers build config 00:07:17.562 bus/cdx: not in enabled drivers build config 00:07:17.562 bus/dpaa: not in enabled drivers build config 00:07:17.562 bus/fslmc: not in enabled drivers build config 00:07:17.562 bus/ifpga: not in enabled drivers build config 00:07:17.562 bus/platform: not in enabled drivers build config 00:07:17.562 bus/uacce: not in enabled drivers build config 00:07:17.562 bus/vmbus: not in enabled drivers build config 00:07:17.562 common/cnxk: not in enabled drivers build config 00:07:17.562 common/mlx5: not in enabled drivers build config 00:07:17.562 common/nfp: not in enabled drivers build config 00:07:17.562 common/nitrox: not in enabled drivers build config 00:07:17.562 common/qat: not in enabled drivers build config 00:07:17.562 common/sfc_efx: not in enabled drivers build config 00:07:17.562 mempool/bucket: not in enabled drivers build config 00:07:17.562 mempool/cnxk: not in enabled 
drivers build config 00:07:17.562 mempool/dpaa: not in enabled drivers build config 00:07:17.562 mempool/dpaa2: not in enabled drivers build config 00:07:17.562 mempool/octeontx: not in enabled drivers build config 00:07:17.562 mempool/stack: not in enabled drivers build config 00:07:17.562 dma/cnxk: not in enabled drivers build config 00:07:17.562 dma/dpaa: not in enabled drivers build config 00:07:17.562 dma/dpaa2: not in enabled drivers build config 00:07:17.562 dma/hisilicon: not in enabled drivers build config 00:07:17.562 dma/idxd: not in enabled drivers build config 00:07:17.562 dma/ioat: not in enabled drivers build config 00:07:17.562 dma/skeleton: not in enabled drivers build config 00:07:17.562 net/af_packet: not in enabled drivers build config 00:07:17.562 net/af_xdp: not in enabled drivers build config 00:07:17.562 net/ark: not in enabled drivers build config 00:07:17.562 net/atlantic: not in enabled drivers build config 00:07:17.562 net/avp: not in enabled drivers build config 00:07:17.562 net/axgbe: not in enabled drivers build config 00:07:17.562 net/bnx2x: not in enabled drivers build config 00:07:17.562 net/bnxt: not in enabled drivers build config 00:07:17.562 net/bonding: not in enabled drivers build config 00:07:17.562 net/cnxk: not in enabled drivers build config 00:07:17.562 net/cpfl: not in enabled drivers build config 00:07:17.562 net/cxgbe: not in enabled drivers build config 00:07:17.562 net/dpaa: not in enabled drivers build config 00:07:17.562 net/dpaa2: not in enabled drivers build config 00:07:17.562 net/e1000: not in enabled drivers build config 00:07:17.562 net/ena: not in enabled drivers build config 00:07:17.562 net/enetc: not in enabled drivers build config 00:07:17.562 net/enetfec: not in enabled drivers build config 00:07:17.562 net/enic: not in enabled drivers build config 00:07:17.562 net/failsafe: not in enabled drivers build config 00:07:17.562 net/fm10k: not in enabled drivers build config 00:07:17.562 net/gve: not in 
enabled drivers build config 00:07:17.562 net/hinic: not in enabled drivers build config 00:07:17.562 net/hns3: not in enabled drivers build config 00:07:17.562 net/i40e: not in enabled drivers build config 00:07:17.562 net/iavf: not in enabled drivers build config 00:07:17.562 net/ice: not in enabled drivers build config 00:07:17.562 net/idpf: not in enabled drivers build config 00:07:17.562 net/igc: not in enabled drivers build config 00:07:17.562 net/ionic: not in enabled drivers build config 00:07:17.562 net/ipn3ke: not in enabled drivers build config 00:07:17.562 net/ixgbe: not in enabled drivers build config 00:07:17.562 net/mana: not in enabled drivers build config 00:07:17.562 net/memif: not in enabled drivers build config 00:07:17.562 net/mlx4: not in enabled drivers build config 00:07:17.562 net/mlx5: not in enabled drivers build config 00:07:17.562 net/mvneta: not in enabled drivers build config 00:07:17.563 net/mvpp2: not in enabled drivers build config 00:07:17.563 net/netvsc: not in enabled drivers build config 00:07:17.563 net/nfb: not in enabled drivers build config 00:07:17.563 net/nfp: not in enabled drivers build config 00:07:17.563 net/ngbe: not in enabled drivers build config 00:07:17.563 net/null: not in enabled drivers build config 00:07:17.563 net/octeontx: not in enabled drivers build config 00:07:17.563 net/octeon_ep: not in enabled drivers build config 00:07:17.563 net/pcap: not in enabled drivers build config 00:07:17.563 net/pfe: not in enabled drivers build config 00:07:17.563 net/qede: not in enabled drivers build config 00:07:17.563 net/ring: not in enabled drivers build config 00:07:17.563 net/sfc: not in enabled drivers build config 00:07:17.563 net/softnic: not in enabled drivers build config 00:07:17.563 net/tap: not in enabled drivers build config 00:07:17.563 net/thunderx: not in enabled drivers build config 00:07:17.563 net/txgbe: not in enabled drivers build config 00:07:17.563 net/vdev_netvsc: not in enabled drivers build 
config 00:07:17.563 net/vhost: not in enabled drivers build config 00:07:17.563 net/virtio: not in enabled drivers build config 00:07:17.563 net/vmxnet3: not in enabled drivers build config 00:07:17.563 raw/*: missing internal dependency, "rawdev" 00:07:17.563 crypto/armv8: not in enabled drivers build config 00:07:17.563 crypto/bcmfs: not in enabled drivers build config 00:07:17.563 crypto/caam_jr: not in enabled drivers build config 00:07:17.563 crypto/ccp: not in enabled drivers build config 00:07:17.563 crypto/cnxk: not in enabled drivers build config 00:07:17.563 crypto/dpaa_sec: not in enabled drivers build config 00:07:17.563 crypto/dpaa2_sec: not in enabled drivers build config 00:07:17.563 crypto/ipsec_mb: not in enabled drivers build config 00:07:17.563 crypto/mlx5: not in enabled drivers build config 00:07:17.563 crypto/mvsam: not in enabled drivers build config 00:07:17.563 crypto/nitrox: not in enabled drivers build config 00:07:17.563 crypto/null: not in enabled drivers build config 00:07:17.563 crypto/octeontx: not in enabled drivers build config 00:07:17.563 crypto/openssl: not in enabled drivers build config 00:07:17.563 crypto/scheduler: not in enabled drivers build config 00:07:17.563 crypto/uadk: not in enabled drivers build config 00:07:17.563 crypto/virtio: not in enabled drivers build config 00:07:17.563 compress/isal: not in enabled drivers build config 00:07:17.563 compress/mlx5: not in enabled drivers build config 00:07:17.563 compress/nitrox: not in enabled drivers build config 00:07:17.563 compress/octeontx: not in enabled drivers build config 00:07:17.563 compress/zlib: not in enabled drivers build config 00:07:17.563 regex/*: missing internal dependency, "regexdev" 00:07:17.563 ml/*: missing internal dependency, "mldev" 00:07:17.563 vdpa/ifc: not in enabled drivers build config 00:07:17.563 vdpa/mlx5: not in enabled drivers build config 00:07:17.563 vdpa/nfp: not in enabled drivers build config 00:07:17.563 vdpa/sfc: not in enabled 
drivers build config 00:07:17.563 event/*: missing internal dependency, "eventdev" 00:07:17.563 baseband/*: missing internal dependency, "bbdev" 00:07:17.563 gpu/*: missing internal dependency, "gpudev" 00:07:17.563 00:07:17.563 00:07:17.563 Build targets in project: 85 00:07:17.563 00:07:17.563 DPDK 24.03.0 00:07:17.563 00:07:17.563 User defined options 00:07:17.563 buildtype : debug 00:07:17.563 default_library : shared 00:07:17.563 libdir : lib 00:07:17.563 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:17.563 b_sanitize : address 00:07:17.563 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:17.563 c_link_args : 00:07:17.563 cpu_instruction_set: native 00:07:17.563 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:17.563 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:17.563 enable_docs : false 00:07:17.563 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:07:17.563 enable_kmods : false 00:07:17.563 max_lcores : 128 00:07:17.563 tests : false 00:07:17.563 00:07:17.563 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:18.132 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:18.132 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:18.132 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:18.132 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:18.132 [4/268] Linking static target lib/librte_kvargs.a 00:07:18.132 [5/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:07:18.132 [6/268] Linking static target lib/librte_log.a 00:07:18.699 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.699 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:18.699 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:18.957 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:18.957 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:18.957 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:18.957 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:18.957 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:18.957 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:19.216 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:19.216 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:19.216 [18/268] Linking static target lib/librte_telemetry.a 00:07:19.216 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.216 [20/268] Linking target lib/librte_log.so.24.1 00:07:19.474 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:19.732 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:19.732 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:19.732 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:19.732 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:19.732 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:19.732 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 
00:07:19.990 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:19.990 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.990 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:19.990 [31/268] Linking target lib/librte_telemetry.so.24.1 00:07:19.990 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:20.249 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:20.249 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:20.249 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:20.249 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:20.508 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:20.508 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:20.508 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:20.767 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:20.767 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:20.767 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:20.767 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:21.025 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:21.025 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:21.025 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:21.284 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:21.284 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:21.284 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:21.284 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:21.284 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:21.542 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:21.542 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:21.801 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:21.801 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:21.801 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:21.801 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:21.801 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:22.059 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:22.059 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:22.059 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:22.059 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:22.318 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:22.318 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:22.318 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:22.577 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:22.577 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:22.577 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:22.835 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:22.835 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:22.835 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:23.093 
[72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:23.093 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:23.093 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:23.093 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:23.093 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:23.093 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:23.351 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:23.351 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:23.351 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:23.351 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:23.609 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:23.609 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:23.609 [84/268] Linking static target lib/librte_ring.a 00:07:23.609 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:23.866 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:23.866 [87/268] Linking static target lib/librte_eal.a 00:07:23.866 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:24.124 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:24.124 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:24.124 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:24.124 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.382 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:24.382 [94/268] Linking static target lib/librte_rcu.a 00:07:24.382 [95/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:24.382 [96/268] Linking static target lib/librte_mempool.a 00:07:24.382 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:24.640 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:24.640 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:24.898 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:24.899 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:24.899 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:24.899 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:24.899 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:25.156 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:25.156 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:25.156 [107/268] Linking static target lib/librte_mbuf.a 00:07:25.416 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:25.416 [109/268] Linking static target lib/librte_net.a 00:07:25.416 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:25.416 [111/268] Linking static target lib/librte_meter.a 00:07:25.678 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.678 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:25.678 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.678 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:25.936 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:26.195 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:26.195 [118/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:26.195 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:26.453 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:26.453 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:26.711 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:26.969 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:26.969 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:27.227 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:27.227 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:27.227 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:27.227 [128/268] Linking static target lib/librte_pci.a 00:07:27.227 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:27.227 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:27.227 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:27.485 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:27.485 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:27.485 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:27.485 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:27.485 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:27.485 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:27.743 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:27.743 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:27.743 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:27.743 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:27.743 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:27.743 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:27.743 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:27.743 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:28.002 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:28.002 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:28.002 [148/268] Linking static target lib/librte_cmdline.a 00:07:28.260 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:28.260 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:28.518 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:28.518 [152/268] Linking static target lib/librte_timer.a 00:07:28.518 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:28.518 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:29.084 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:29.084 [156/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.084 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:29.084 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:29.342 [159/268] Linking static target lib/librte_ethdev.a 00:07:29.342 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:29.342 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:29.342 [162/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:29.342 [163/268] Linking static target lib/librte_compressdev.a 00:07:29.342 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:29.342 [165/268] Linking static target lib/librte_hash.a 00:07:29.600 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:29.600 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:29.858 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:29.858 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:29.858 [170/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:29.858 [171/268] Linking static target lib/librte_dmadev.a 00:07:30.117 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:30.117 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:30.382 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.382 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:30.640 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:30.640 [177/268] Linking static target lib/librte_cryptodev.a 00:07:30.640 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:30.640 [179/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.640 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:30.640 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:30.898 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:30.898 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:30.898 [184/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:31.156 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:31.156 [186/268] Linking static target lib/librte_power.a 00:07:31.414 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:31.672 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:31.672 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:31.672 [190/268] Linking static target lib/librte_reorder.a 00:07:31.672 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:31.672 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:31.672 [193/268] Linking static target lib/librte_security.a 00:07:32.237 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:32.237 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.496 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.496 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:32.754 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:33.011 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:33.011 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:33.011 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:33.011 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:33.011 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:33.268 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:33.268 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:33.861 [206/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:33.861 [207/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:33.861 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:33.861 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:33.861 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:33.861 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:34.119 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:34.119 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:34.119 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:34.119 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:34.119 [216/268] Linking static target drivers/librte_bus_vdev.a 00:07:34.119 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:34.119 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:34.119 [219/268] Linking static target drivers/librte_bus_pci.a 00:07:34.376 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:34.376 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:34.376 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:34.633 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:34.633 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:34.633 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:34.633 [226/268] Linking static target drivers/librte_mempool_ring.a 00:07:34.890 [227/268] Generating 
drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.456 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:35.715 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:35.973 [230/268] Linking target lib/librte_eal.so.24.1 00:07:35.973 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:35.973 [232/268] Linking target lib/librte_ring.so.24.1 00:07:35.973 [233/268] Linking target lib/librte_pci.so.24.1 00:07:35.973 [234/268] Linking target lib/librte_meter.so.24.1 00:07:35.973 [235/268] Linking target lib/librte_timer.so.24.1 00:07:36.231 [236/268] Linking target lib/librte_dmadev.so.24.1 00:07:36.231 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:36.231 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:36.231 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:36.231 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:36.231 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:36.231 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:36.231 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:36.231 [244/268] Linking target lib/librte_mempool.so.24.1 00:07:36.231 [245/268] Linking target lib/librte_rcu.so.24.1 00:07:36.489 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:36.490 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:36.490 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:36.490 [249/268] Linking target lib/librte_mbuf.so.24.1 00:07:36.747 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:36.747 [251/268] Linking 
target lib/librte_reorder.so.24.1 00:07:36.747 [252/268] Linking target lib/librte_net.so.24.1 00:07:36.747 [253/268] Linking target lib/librte_compressdev.so.24.1 00:07:36.747 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:07:36.747 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:36.747 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:37.005 [257/268] Linking target lib/librte_cmdline.so.24.1 00:07:37.005 [258/268] Linking target lib/librte_hash.so.24.1 00:07:37.005 [259/268] Linking target lib/librte_security.so.24.1 00:07:37.005 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:37.263 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:37.521 [262/268] Linking target lib/librte_ethdev.so.24.1 00:07:37.780 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:37.780 [264/268] Linking target lib/librte_power.so.24.1 00:07:39.717 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:39.993 [266/268] Linking static target lib/librte_vhost.a 00:07:41.368 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:41.368 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:41.628 INFO: autodetecting backend as ninja 00:07:41.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:08:03.572 CC lib/ut_mock/mock.o 00:08:03.572 CC lib/log/log.o 00:08:03.572 CC lib/log/log_deprecated.o 00:08:03.572 CC lib/log/log_flags.o 00:08:03.572 CC lib/ut/ut.o 00:08:03.572 LIB libspdk_log.a 00:08:03.572 LIB libspdk_ut_mock.a 00:08:03.572 LIB libspdk_ut.a 00:08:03.572 SO libspdk_log.so.7.1 00:08:03.572 SO libspdk_ut_mock.so.6.0 00:08:03.572 SO libspdk_ut.so.2.0 00:08:03.572 SYMLINK libspdk_ut_mock.so 
00:08:03.572 SYMLINK libspdk_log.so 00:08:03.572 SYMLINK libspdk_ut.so 00:08:03.572 CC lib/dma/dma.o 00:08:03.572 CXX lib/trace_parser/trace.o 00:08:03.572 CC lib/util/bit_array.o 00:08:03.572 CC lib/util/base64.o 00:08:03.572 CC lib/ioat/ioat.o 00:08:03.572 CC lib/util/crc16.o 00:08:03.572 CC lib/util/cpuset.o 00:08:03.572 CC lib/util/crc32.o 00:08:03.572 CC lib/util/crc32c.o 00:08:03.572 CC lib/vfio_user/host/vfio_user_pci.o 00:08:03.572 CC lib/util/crc32_ieee.o 00:08:03.572 CC lib/vfio_user/host/vfio_user.o 00:08:03.572 CC lib/util/crc64.o 00:08:03.572 CC lib/util/dif.o 00:08:03.572 LIB libspdk_dma.a 00:08:03.572 SO libspdk_dma.so.5.0 00:08:03.572 CC lib/util/fd.o 00:08:03.572 CC lib/util/fd_group.o 00:08:03.572 SYMLINK libspdk_dma.so 00:08:03.572 CC lib/util/file.o 00:08:03.572 CC lib/util/hexlify.o 00:08:03.572 CC lib/util/iov.o 00:08:03.572 CC lib/util/math.o 00:08:03.572 LIB libspdk_vfio_user.a 00:08:03.572 CC lib/util/net.o 00:08:03.572 SO libspdk_vfio_user.so.5.0 00:08:03.572 LIB libspdk_ioat.a 00:08:03.572 SO libspdk_ioat.so.7.0 00:08:03.572 SYMLINK libspdk_vfio_user.so 00:08:03.572 CC lib/util/pipe.o 00:08:03.572 CC lib/util/strerror_tls.o 00:08:03.572 CC lib/util/string.o 00:08:03.572 SYMLINK libspdk_ioat.so 00:08:03.572 CC lib/util/uuid.o 00:08:03.572 CC lib/util/xor.o 00:08:03.572 CC lib/util/zipf.o 00:08:03.572 CC lib/util/md5.o 00:08:03.572 LIB libspdk_util.a 00:08:03.830 SO libspdk_util.so.10.1 00:08:03.830 LIB libspdk_trace_parser.a 00:08:03.830 SO libspdk_trace_parser.so.6.0 00:08:03.830 SYMLINK libspdk_util.so 00:08:04.088 SYMLINK libspdk_trace_parser.so 00:08:04.088 CC lib/idxd/idxd.o 00:08:04.088 CC lib/idxd/idxd_user.o 00:08:04.088 CC lib/vmd/vmd.o 00:08:04.088 CC lib/idxd/idxd_kernel.o 00:08:04.088 CC lib/conf/conf.o 00:08:04.088 CC lib/rdma_utils/rdma_utils.o 00:08:04.088 CC lib/vmd/led.o 00:08:04.088 CC lib/env_dpdk/env.o 00:08:04.088 CC lib/env_dpdk/memory.o 00:08:04.088 CC lib/json/json_parse.o 00:08:04.346 CC lib/json/json_util.o 
00:08:04.346 CC lib/json/json_write.o 00:08:04.346 CC lib/env_dpdk/pci.o 00:08:04.346 LIB libspdk_conf.a 00:08:04.605 SO libspdk_conf.so.6.0 00:08:04.605 LIB libspdk_rdma_utils.a 00:08:04.605 CC lib/env_dpdk/init.o 00:08:04.605 SO libspdk_rdma_utils.so.1.0 00:08:04.605 SYMLINK libspdk_conf.so 00:08:04.605 CC lib/env_dpdk/threads.o 00:08:04.605 SYMLINK libspdk_rdma_utils.so 00:08:04.605 CC lib/env_dpdk/pci_ioat.o 00:08:04.605 CC lib/env_dpdk/pci_virtio.o 00:08:04.605 LIB libspdk_json.a 00:08:04.605 SO libspdk_json.so.6.0 00:08:04.605 CC lib/env_dpdk/pci_vmd.o 00:08:04.871 SYMLINK libspdk_json.so 00:08:04.871 CC lib/env_dpdk/pci_idxd.o 00:08:04.871 CC lib/env_dpdk/pci_event.o 00:08:04.871 CC lib/rdma_provider/common.o 00:08:04.871 CC lib/env_dpdk/sigbus_handler.o 00:08:04.871 CC lib/env_dpdk/pci_dpdk.o 00:08:04.871 CC lib/rdma_provider/rdma_provider_verbs.o 00:08:04.871 CC lib/jsonrpc/jsonrpc_server.o 00:08:05.129 LIB libspdk_vmd.a 00:08:05.129 CC lib/env_dpdk/pci_dpdk_2207.o 00:08:05.129 LIB libspdk_idxd.a 00:08:05.129 SO libspdk_vmd.so.6.0 00:08:05.129 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:08:05.129 SO libspdk_idxd.so.12.1 00:08:05.129 CC lib/env_dpdk/pci_dpdk_2211.o 00:08:05.129 SYMLINK libspdk_vmd.so 00:08:05.129 CC lib/jsonrpc/jsonrpc_client.o 00:08:05.129 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:08:05.129 SYMLINK libspdk_idxd.so 00:08:05.129 LIB libspdk_rdma_provider.a 00:08:05.129 SO libspdk_rdma_provider.so.7.0 00:08:05.387 SYMLINK libspdk_rdma_provider.so 00:08:05.387 LIB libspdk_jsonrpc.a 00:08:05.387 SO libspdk_jsonrpc.so.6.0 00:08:05.387 SYMLINK libspdk_jsonrpc.so 00:08:05.954 CC lib/rpc/rpc.o 00:08:05.954 LIB libspdk_rpc.a 00:08:05.954 SO libspdk_rpc.so.6.0 00:08:06.212 SYMLINK libspdk_rpc.so 00:08:06.212 LIB libspdk_env_dpdk.a 00:08:06.212 SO libspdk_env_dpdk.so.15.1 00:08:06.212 CC lib/keyring/keyring.o 00:08:06.212 CC lib/keyring/keyring_rpc.o 00:08:06.212 CC lib/notify/notify.o 00:08:06.212 CC lib/notify/notify_rpc.o 00:08:06.212 CC lib/trace/trace.o 
00:08:06.212 CC lib/trace/trace_flags.o 00:08:06.212 CC lib/trace/trace_rpc.o 00:08:06.471 SYMLINK libspdk_env_dpdk.so 00:08:06.471 LIB libspdk_notify.a 00:08:06.471 SO libspdk_notify.so.6.0 00:08:06.471 LIB libspdk_keyring.a 00:08:06.471 SYMLINK libspdk_notify.so 00:08:06.729 SO libspdk_keyring.so.2.0 00:08:06.729 LIB libspdk_trace.a 00:08:06.729 SO libspdk_trace.so.11.0 00:08:06.729 SYMLINK libspdk_keyring.so 00:08:06.729 SYMLINK libspdk_trace.so 00:08:06.988 CC lib/sock/sock.o 00:08:06.988 CC lib/sock/sock_rpc.o 00:08:06.988 CC lib/thread/iobuf.o 00:08:06.988 CC lib/thread/thread.o 00:08:07.556 LIB libspdk_sock.a 00:08:07.556 SO libspdk_sock.so.10.0 00:08:07.556 SYMLINK libspdk_sock.so 00:08:07.815 CC lib/nvme/nvme_ctrlr_cmd.o 00:08:07.815 CC lib/nvme/nvme_ctrlr.o 00:08:07.815 CC lib/nvme/nvme_fabric.o 00:08:07.815 CC lib/nvme/nvme_ns_cmd.o 00:08:07.815 CC lib/nvme/nvme_ns.o 00:08:07.815 CC lib/nvme/nvme_pcie_common.o 00:08:07.815 CC lib/nvme/nvme_qpair.o 00:08:07.815 CC lib/nvme/nvme_pcie.o 00:08:07.815 CC lib/nvme/nvme.o 00:08:08.749 CC lib/nvme/nvme_quirks.o 00:08:08.749 CC lib/nvme/nvme_transport.o 00:08:08.749 CC lib/nvme/nvme_discovery.o 00:08:09.007 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:08:09.007 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:08:09.265 LIB libspdk_thread.a 00:08:09.265 SO libspdk_thread.so.11.0 00:08:09.265 SYMLINK libspdk_thread.so 00:08:09.265 CC lib/nvme/nvme_tcp.o 00:08:09.265 CC lib/nvme/nvme_opal.o 00:08:09.523 CC lib/nvme/nvme_io_msg.o 00:08:09.523 CC lib/nvme/nvme_poll_group.o 00:08:09.523 CC lib/accel/accel.o 00:08:09.523 CC lib/nvme/nvme_zns.o 00:08:09.781 CC lib/nvme/nvme_stubs.o 00:08:09.781 CC lib/blob/blobstore.o 00:08:09.781 CC lib/blob/request.o 00:08:10.039 CC lib/blob/zeroes.o 00:08:10.039 CC lib/accel/accel_rpc.o 00:08:10.297 CC lib/accel/accel_sw.o 00:08:10.297 CC lib/blob/blob_bs_dev.o 00:08:10.297 CC lib/nvme/nvme_auth.o 00:08:10.297 CC lib/nvme/nvme_cuse.o 00:08:10.555 CC lib/init/json_config.o 00:08:10.555 CC 
lib/virtio/virtio.o 00:08:10.555 CC lib/nvme/nvme_rdma.o 00:08:10.555 CC lib/fsdev/fsdev.o 00:08:10.813 CC lib/init/subsystem.o 00:08:10.813 CC lib/virtio/virtio_vhost_user.o 00:08:10.813 LIB libspdk_accel.a 00:08:11.072 SO libspdk_accel.so.16.0 00:08:11.072 CC lib/fsdev/fsdev_io.o 00:08:11.072 CC lib/init/subsystem_rpc.o 00:08:11.072 SYMLINK libspdk_accel.so 00:08:11.072 CC lib/fsdev/fsdev_rpc.o 00:08:11.072 CC lib/virtio/virtio_vfio_user.o 00:08:11.072 CC lib/init/rpc.o 00:08:11.341 CC lib/virtio/virtio_pci.o 00:08:11.341 LIB libspdk_init.a 00:08:11.341 SO libspdk_init.so.6.0 00:08:11.341 CC lib/bdev/bdev.o 00:08:11.341 CC lib/bdev/bdev_rpc.o 00:08:11.341 CC lib/bdev/bdev_zone.o 00:08:11.341 CC lib/bdev/part.o 00:08:11.599 CC lib/bdev/scsi_nvme.o 00:08:11.599 LIB libspdk_fsdev.a 00:08:11.599 SYMLINK libspdk_init.so 00:08:11.599 SO libspdk_fsdev.so.2.0 00:08:11.599 SYMLINK libspdk_fsdev.so 00:08:11.599 CC lib/event/app.o 00:08:11.599 LIB libspdk_virtio.a 00:08:11.599 CC lib/event/reactor.o 00:08:11.599 CC lib/event/log_rpc.o 00:08:11.858 SO libspdk_virtio.so.7.0 00:08:11.858 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:08:11.858 SYMLINK libspdk_virtio.so 00:08:11.858 CC lib/event/app_rpc.o 00:08:11.858 CC lib/event/scheduler_static.o 00:08:12.504 LIB libspdk_event.a 00:08:12.504 SO libspdk_event.so.14.0 00:08:12.504 SYMLINK libspdk_event.so 00:08:12.504 LIB libspdk_nvme.a 00:08:12.504 LIB libspdk_fuse_dispatcher.a 00:08:12.762 SO libspdk_fuse_dispatcher.so.1.0 00:08:12.762 SYMLINK libspdk_fuse_dispatcher.so 00:08:12.762 SO libspdk_nvme.so.15.0 00:08:13.021 SYMLINK libspdk_nvme.so 00:08:14.395 LIB libspdk_blob.a 00:08:14.395 SO libspdk_blob.so.11.0 00:08:14.395 SYMLINK libspdk_blob.so 00:08:14.653 CC lib/lvol/lvol.o 00:08:14.653 CC lib/blobfs/blobfs.o 00:08:14.653 CC lib/blobfs/tree.o 00:08:15.221 LIB libspdk_bdev.a 00:08:15.221 SO libspdk_bdev.so.17.0 00:08:15.221 SYMLINK libspdk_bdev.so 00:08:15.479 CC lib/nbd/nbd.o 00:08:15.479 CC lib/nbd/nbd_rpc.o 00:08:15.479 
CC lib/nvmf/ctrlr.o 00:08:15.479 CC lib/nvmf/ctrlr_discovery.o 00:08:15.479 CC lib/nvmf/ctrlr_bdev.o 00:08:15.479 CC lib/scsi/dev.o 00:08:15.479 CC lib/ftl/ftl_core.o 00:08:15.479 CC lib/ublk/ublk.o 00:08:15.738 LIB libspdk_blobfs.a 00:08:15.738 SO libspdk_blobfs.so.10.0 00:08:15.738 CC lib/ftl/ftl_init.o 00:08:15.738 LIB libspdk_lvol.a 00:08:15.738 SYMLINK libspdk_blobfs.so 00:08:15.738 CC lib/ftl/ftl_layout.o 00:08:15.738 SO libspdk_lvol.so.10.0 00:08:15.996 CC lib/scsi/lun.o 00:08:15.997 SYMLINK libspdk_lvol.so 00:08:15.997 CC lib/scsi/port.o 00:08:15.997 CC lib/ftl/ftl_debug.o 00:08:15.997 CC lib/ublk/ublk_rpc.o 00:08:15.997 CC lib/ftl/ftl_io.o 00:08:16.255 LIB libspdk_nbd.a 00:08:16.255 SO libspdk_nbd.so.7.0 00:08:16.255 CC lib/nvmf/subsystem.o 00:08:16.255 CC lib/ftl/ftl_sb.o 00:08:16.255 SYMLINK libspdk_nbd.so 00:08:16.255 CC lib/ftl/ftl_l2p.o 00:08:16.255 CC lib/ftl/ftl_l2p_flat.o 00:08:16.255 CC lib/scsi/scsi.o 00:08:16.255 CC lib/nvmf/nvmf.o 00:08:16.513 CC lib/ftl/ftl_nv_cache.o 00:08:16.513 CC lib/scsi/scsi_bdev.o 00:08:16.513 CC lib/nvmf/nvmf_rpc.o 00:08:16.513 LIB libspdk_ublk.a 00:08:16.513 CC lib/ftl/ftl_band.o 00:08:16.513 SO libspdk_ublk.so.3.0 00:08:16.513 CC lib/ftl/ftl_band_ops.o 00:08:16.513 CC lib/ftl/ftl_writer.o 00:08:16.513 SYMLINK libspdk_ublk.so 00:08:16.513 CC lib/scsi/scsi_pr.o 00:08:16.772 CC lib/scsi/scsi_rpc.o 00:08:16.772 CC lib/scsi/task.o 00:08:17.030 CC lib/ftl/ftl_rq.o 00:08:17.030 CC lib/nvmf/transport.o 00:08:17.030 CC lib/ftl/ftl_reloc.o 00:08:17.030 CC lib/ftl/ftl_l2p_cache.o 00:08:17.030 LIB libspdk_scsi.a 00:08:17.030 CC lib/nvmf/tcp.o 00:08:17.289 SO libspdk_scsi.so.9.0 00:08:17.289 SYMLINK libspdk_scsi.so 00:08:17.289 CC lib/ftl/ftl_p2l.o 00:08:17.547 CC lib/nvmf/stubs.o 00:08:17.547 CC lib/nvmf/mdns_server.o 00:08:17.547 CC lib/iscsi/conn.o 00:08:17.547 CC lib/iscsi/init_grp.o 00:08:17.805 CC lib/iscsi/iscsi.o 00:08:17.805 CC lib/iscsi/param.o 00:08:17.805 CC lib/iscsi/portal_grp.o 00:08:17.805 CC lib/ftl/ftl_p2l_log.o 
00:08:18.063 CC lib/nvmf/rdma.o 00:08:18.063 CC lib/nvmf/auth.o 00:08:18.063 CC lib/iscsi/tgt_node.o 00:08:18.063 CC lib/iscsi/iscsi_subsystem.o 00:08:18.321 CC lib/ftl/mngt/ftl_mngt.o 00:08:18.321 CC lib/iscsi/iscsi_rpc.o 00:08:18.321 CC lib/iscsi/task.o 00:08:18.579 CC lib/vhost/vhost.o 00:08:18.579 CC lib/vhost/vhost_rpc.o 00:08:18.579 CC lib/vhost/vhost_scsi.o 00:08:18.579 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:08:18.579 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:08:18.837 CC lib/ftl/mngt/ftl_mngt_startup.o 00:08:18.837 CC lib/vhost/vhost_blk.o 00:08:18.837 CC lib/ftl/mngt/ftl_mngt_md.o 00:08:19.094 CC lib/ftl/mngt/ftl_mngt_misc.o 00:08:19.094 CC lib/vhost/rte_vhost_user.o 00:08:19.094 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:08:19.353 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:08:19.353 CC lib/ftl/mngt/ftl_mngt_band.o 00:08:19.353 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:08:19.353 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:08:19.353 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:08:19.353 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:08:19.611 CC lib/ftl/utils/ftl_conf.o 00:08:19.611 CC lib/ftl/utils/ftl_md.o 00:08:19.611 CC lib/ftl/utils/ftl_mempool.o 00:08:19.611 LIB libspdk_iscsi.a 00:08:19.611 CC lib/ftl/utils/ftl_bitmap.o 00:08:19.869 CC lib/ftl/utils/ftl_property.o 00:08:19.869 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:08:19.869 SO libspdk_iscsi.so.8.0 00:08:19.869 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:08:19.869 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:08:19.869 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:08:19.869 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:08:19.869 SYMLINK libspdk_iscsi.so 00:08:19.869 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:08:20.127 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:08:20.127 CC lib/ftl/upgrade/ftl_sb_v3.o 00:08:20.127 CC lib/ftl/upgrade/ftl_sb_v5.o 00:08:20.127 CC lib/ftl/nvc/ftl_nvc_dev.o 00:08:20.127 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:08:20.127 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:08:20.127 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:08:20.127 CC 
lib/ftl/base/ftl_base_dev.o 00:08:20.385 CC lib/ftl/base/ftl_base_bdev.o 00:08:20.385 LIB libspdk_vhost.a 00:08:20.385 CC lib/ftl/ftl_trace.o 00:08:20.385 SO libspdk_vhost.so.8.0 00:08:20.385 SYMLINK libspdk_vhost.so 00:08:20.643 LIB libspdk_ftl.a 00:08:20.902 LIB libspdk_nvmf.a 00:08:20.902 SO libspdk_ftl.so.9.0 00:08:20.902 SO libspdk_nvmf.so.20.0 00:08:21.235 SYMLINK libspdk_ftl.so 00:08:21.235 SYMLINK libspdk_nvmf.so 00:08:21.504 CC module/env_dpdk/env_dpdk_rpc.o 00:08:21.763 CC module/keyring/linux/keyring.o 00:08:21.763 CC module/sock/posix/posix.o 00:08:21.763 CC module/keyring/file/keyring.o 00:08:21.763 CC module/accel/dsa/accel_dsa.o 00:08:21.763 CC module/scheduler/dynamic/scheduler_dynamic.o 00:08:21.763 CC module/blob/bdev/blob_bdev.o 00:08:21.763 CC module/accel/error/accel_error.o 00:08:21.763 CC module/accel/ioat/accel_ioat.o 00:08:21.763 CC module/fsdev/aio/fsdev_aio.o 00:08:21.763 LIB libspdk_env_dpdk_rpc.a 00:08:21.763 SO libspdk_env_dpdk_rpc.so.6.0 00:08:21.763 SYMLINK libspdk_env_dpdk_rpc.so 00:08:21.763 CC module/fsdev/aio/fsdev_aio_rpc.o 00:08:21.763 CC module/keyring/linux/keyring_rpc.o 00:08:21.763 CC module/keyring/file/keyring_rpc.o 00:08:22.021 CC module/accel/ioat/accel_ioat_rpc.o 00:08:22.021 CC module/accel/error/accel_error_rpc.o 00:08:22.021 LIB libspdk_scheduler_dynamic.a 00:08:22.021 SO libspdk_scheduler_dynamic.so.4.0 00:08:22.021 LIB libspdk_keyring_linux.a 00:08:22.021 CC module/accel/dsa/accel_dsa_rpc.o 00:08:22.021 LIB libspdk_keyring_file.a 00:08:22.021 SO libspdk_keyring_linux.so.1.0 00:08:22.021 LIB libspdk_blob_bdev.a 00:08:22.021 SO libspdk_keyring_file.so.2.0 00:08:22.021 SYMLINK libspdk_scheduler_dynamic.so 00:08:22.021 SO libspdk_blob_bdev.so.11.0 00:08:22.021 SYMLINK libspdk_keyring_linux.so 00:08:22.021 LIB libspdk_accel_error.a 00:08:22.021 LIB libspdk_accel_ioat.a 00:08:22.021 SYMLINK libspdk_keyring_file.so 00:08:22.021 CC module/fsdev/aio/linux_aio_mgr.o 00:08:22.021 SYMLINK libspdk_blob_bdev.so 00:08:22.021 SO 
libspdk_accel_error.so.2.0 00:08:22.021 SO libspdk_accel_ioat.so.6.0 00:08:22.021 LIB libspdk_accel_dsa.a 00:08:22.279 SYMLINK libspdk_accel_ioat.so 00:08:22.279 SO libspdk_accel_dsa.so.5.0 00:08:22.279 SYMLINK libspdk_accel_error.so 00:08:22.279 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:08:22.279 CC module/scheduler/gscheduler/gscheduler.o 00:08:22.279 SYMLINK libspdk_accel_dsa.so 00:08:22.279 CC module/accel/iaa/accel_iaa.o 00:08:22.279 CC module/accel/iaa/accel_iaa_rpc.o 00:08:22.279 LIB libspdk_scheduler_dpdk_governor.a 00:08:22.537 LIB libspdk_scheduler_gscheduler.a 00:08:22.537 CC module/bdev/error/vbdev_error.o 00:08:22.537 SO libspdk_scheduler_dpdk_governor.so.4.0 00:08:22.537 CC module/bdev/delay/vbdev_delay.o 00:08:22.537 SO libspdk_scheduler_gscheduler.so.4.0 00:08:22.537 CC module/bdev/gpt/gpt.o 00:08:22.537 LIB libspdk_accel_iaa.a 00:08:22.537 CC module/blobfs/bdev/blobfs_bdev.o 00:08:22.537 SYMLINK libspdk_scheduler_dpdk_governor.so 00:08:22.537 CC module/bdev/delay/vbdev_delay_rpc.o 00:08:22.537 SYMLINK libspdk_scheduler_gscheduler.so 00:08:22.537 SO libspdk_accel_iaa.so.3.0 00:08:22.537 LIB libspdk_fsdev_aio.a 00:08:22.537 LIB libspdk_sock_posix.a 00:08:22.537 SO libspdk_fsdev_aio.so.1.0 00:08:22.537 SYMLINK libspdk_accel_iaa.so 00:08:22.537 SO libspdk_sock_posix.so.6.0 00:08:22.820 CC module/bdev/lvol/vbdev_lvol.o 00:08:22.820 CC module/bdev/gpt/vbdev_gpt.o 00:08:22.820 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:08:22.820 SYMLINK libspdk_fsdev_aio.so 00:08:22.820 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:08:22.820 CC module/bdev/malloc/bdev_malloc.o 00:08:22.820 CC module/bdev/error/vbdev_error_rpc.o 00:08:22.820 SYMLINK libspdk_sock_posix.so 00:08:22.820 CC module/bdev/null/bdev_null.o 00:08:22.820 LIB libspdk_bdev_error.a 00:08:22.820 CC module/bdev/nvme/bdev_nvme.o 00:08:22.820 LIB libspdk_blobfs_bdev.a 00:08:22.820 LIB libspdk_bdev_delay.a 00:08:22.820 CC module/bdev/passthru/vbdev_passthru.o 00:08:22.820 SO libspdk_bdev_error.so.6.0 
00:08:22.820 SO libspdk_blobfs_bdev.so.6.0 00:08:22.820 SO libspdk_bdev_delay.so.6.0 00:08:23.078 LIB libspdk_bdev_gpt.a 00:08:23.078 SYMLINK libspdk_bdev_error.so 00:08:23.078 SYMLINK libspdk_blobfs_bdev.so 00:08:23.078 CC module/bdev/null/bdev_null_rpc.o 00:08:23.078 SYMLINK libspdk_bdev_delay.so 00:08:23.078 SO libspdk_bdev_gpt.so.6.0 00:08:23.078 SYMLINK libspdk_bdev_gpt.so 00:08:23.078 CC module/bdev/split/vbdev_split.o 00:08:23.078 CC module/bdev/malloc/bdev_malloc_rpc.o 00:08:23.078 CC module/bdev/raid/bdev_raid.o 00:08:23.078 LIB libspdk_bdev_null.a 00:08:23.335 SO libspdk_bdev_null.so.6.0 00:08:23.335 CC module/bdev/zone_block/vbdev_zone_block.o 00:08:23.335 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:08:23.335 CC module/bdev/aio/bdev_aio.o 00:08:23.335 LIB libspdk_bdev_lvol.a 00:08:23.335 SYMLINK libspdk_bdev_null.so 00:08:23.335 CC module/bdev/ftl/bdev_ftl.o 00:08:23.335 SO libspdk_bdev_lvol.so.6.0 00:08:23.335 LIB libspdk_bdev_malloc.a 00:08:23.335 SO libspdk_bdev_malloc.so.6.0 00:08:23.335 SYMLINK libspdk_bdev_lvol.so 00:08:23.335 CC module/bdev/ftl/bdev_ftl_rpc.o 00:08:23.335 CC module/bdev/split/vbdev_split_rpc.o 00:08:23.335 LIB libspdk_bdev_passthru.a 00:08:23.592 SYMLINK libspdk_bdev_malloc.so 00:08:23.592 CC module/bdev/raid/bdev_raid_rpc.o 00:08:23.592 CC module/bdev/iscsi/bdev_iscsi.o 00:08:23.592 SO libspdk_bdev_passthru.so.6.0 00:08:23.592 SYMLINK libspdk_bdev_passthru.so 00:08:23.592 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:08:23.592 LIB libspdk_bdev_split.a 00:08:23.592 CC module/bdev/raid/bdev_raid_sb.o 00:08:23.592 LIB libspdk_bdev_ftl.a 00:08:23.592 SO libspdk_bdev_split.so.6.0 00:08:23.592 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:08:23.592 CC module/bdev/aio/bdev_aio_rpc.o 00:08:23.592 SO libspdk_bdev_ftl.so.6.0 00:08:23.850 SYMLINK libspdk_bdev_split.so 00:08:23.850 CC module/bdev/nvme/nvme_rpc.o 00:08:23.850 CC module/bdev/nvme/bdev_nvme_rpc.o 00:08:23.850 SYMLINK libspdk_bdev_ftl.so 00:08:23.850 CC 
module/bdev/nvme/bdev_mdns_client.o 00:08:23.850 LIB libspdk_bdev_zone_block.a 00:08:23.850 LIB libspdk_bdev_aio.a 00:08:23.850 CC module/bdev/virtio/bdev_virtio_scsi.o 00:08:23.850 SO libspdk_bdev_zone_block.so.6.0 00:08:23.850 SO libspdk_bdev_aio.so.6.0 00:08:23.850 LIB libspdk_bdev_iscsi.a 00:08:23.850 SO libspdk_bdev_iscsi.so.6.0 00:08:24.108 SYMLINK libspdk_bdev_aio.so 00:08:24.108 SYMLINK libspdk_bdev_zone_block.so 00:08:24.108 CC module/bdev/nvme/vbdev_opal.o 00:08:24.108 CC module/bdev/nvme/vbdev_opal_rpc.o 00:08:24.108 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:08:24.108 CC module/bdev/virtio/bdev_virtio_blk.o 00:08:24.108 SYMLINK libspdk_bdev_iscsi.so 00:08:24.108 CC module/bdev/raid/raid0.o 00:08:24.108 CC module/bdev/virtio/bdev_virtio_rpc.o 00:08:24.108 CC module/bdev/raid/raid1.o 00:08:24.108 CC module/bdev/raid/concat.o 00:08:24.366 CC module/bdev/raid/raid5f.o 00:08:24.625 LIB libspdk_bdev_virtio.a 00:08:24.625 SO libspdk_bdev_virtio.so.6.0 00:08:24.625 SYMLINK libspdk_bdev_virtio.so 00:08:24.884 LIB libspdk_bdev_raid.a 00:08:25.144 SO libspdk_bdev_raid.so.6.0 00:08:25.144 SYMLINK libspdk_bdev_raid.so 00:08:26.521 LIB libspdk_bdev_nvme.a 00:08:26.521 SO libspdk_bdev_nvme.so.7.1 00:08:26.521 SYMLINK libspdk_bdev_nvme.so 00:08:27.086 CC module/event/subsystems/scheduler/scheduler.o 00:08:27.086 CC module/event/subsystems/sock/sock.o 00:08:27.086 CC module/event/subsystems/vmd/vmd.o 00:08:27.086 CC module/event/subsystems/vmd/vmd_rpc.o 00:08:27.086 CC module/event/subsystems/iobuf/iobuf.o 00:08:27.086 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:08:27.086 CC module/event/subsystems/keyring/keyring.o 00:08:27.086 CC module/event/subsystems/fsdev/fsdev.o 00:08:27.086 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:08:27.086 LIB libspdk_event_keyring.a 00:08:27.086 LIB libspdk_event_scheduler.a 00:08:27.086 LIB libspdk_event_fsdev.a 00:08:27.086 LIB libspdk_event_sock.a 00:08:27.086 LIB libspdk_event_vmd.a 00:08:27.086 LIB 
libspdk_event_vhost_blk.a 00:08:27.086 SO libspdk_event_scheduler.so.4.0 00:08:27.086 LIB libspdk_event_iobuf.a 00:08:27.086 SO libspdk_event_keyring.so.1.0 00:08:27.086 SO libspdk_event_fsdev.so.1.0 00:08:27.086 SO libspdk_event_sock.so.5.0 00:08:27.086 SO libspdk_event_vmd.so.6.0 00:08:27.086 SO libspdk_event_vhost_blk.so.3.0 00:08:27.086 SO libspdk_event_iobuf.so.3.0 00:08:27.345 SYMLINK libspdk_event_scheduler.so 00:08:27.345 SYMLINK libspdk_event_fsdev.so 00:08:27.345 SYMLINK libspdk_event_keyring.so 00:08:27.345 SYMLINK libspdk_event_sock.so 00:08:27.345 SYMLINK libspdk_event_vmd.so 00:08:27.345 SYMLINK libspdk_event_vhost_blk.so 00:08:27.345 SYMLINK libspdk_event_iobuf.so 00:08:27.605 CC module/event/subsystems/accel/accel.o 00:08:27.605 LIB libspdk_event_accel.a 00:08:27.863 SO libspdk_event_accel.so.6.0 00:08:27.863 SYMLINK libspdk_event_accel.so 00:08:28.121 CC module/event/subsystems/bdev/bdev.o 00:08:28.380 LIB libspdk_event_bdev.a 00:08:28.380 SO libspdk_event_bdev.so.6.0 00:08:28.380 SYMLINK libspdk_event_bdev.so 00:08:28.638 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:08:28.638 CC module/event/subsystems/ublk/ublk.o 00:08:28.638 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:08:28.638 CC module/event/subsystems/scsi/scsi.o 00:08:28.638 CC module/event/subsystems/nbd/nbd.o 00:08:28.931 LIB libspdk_event_ublk.a 00:08:28.931 SO libspdk_event_ublk.so.3.0 00:08:28.931 LIB libspdk_event_scsi.a 00:08:28.931 LIB libspdk_event_nbd.a 00:08:28.931 SO libspdk_event_scsi.so.6.0 00:08:28.931 SO libspdk_event_nbd.so.6.0 00:08:28.931 SYMLINK libspdk_event_ublk.so 00:08:28.931 SYMLINK libspdk_event_nbd.so 00:08:28.931 SYMLINK libspdk_event_scsi.so 00:08:28.931 LIB libspdk_event_nvmf.a 00:08:28.931 SO libspdk_event_nvmf.so.6.0 00:08:29.189 SYMLINK libspdk_event_nvmf.so 00:08:29.189 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:08:29.189 CC module/event/subsystems/iscsi/iscsi.o 00:08:29.448 LIB libspdk_event_vhost_scsi.a 00:08:29.448 SO 
libspdk_event_vhost_scsi.so.3.0 00:08:29.448 SYMLINK libspdk_event_vhost_scsi.so 00:08:29.448 LIB libspdk_event_iscsi.a 00:08:29.448 SO libspdk_event_iscsi.so.6.0 00:08:29.705 SYMLINK libspdk_event_iscsi.so 00:08:29.705 SO libspdk.so.6.0 00:08:29.705 SYMLINK libspdk.so 00:08:29.963 CXX app/trace/trace.o 00:08:29.963 CC app/trace_record/trace_record.o 00:08:29.963 CC app/iscsi_tgt/iscsi_tgt.o 00:08:29.963 CC app/nvmf_tgt/nvmf_main.o 00:08:29.963 CC app/spdk_tgt/spdk_tgt.o 00:08:29.963 CC examples/interrupt_tgt/interrupt_tgt.o 00:08:30.222 CC examples/ioat/perf/perf.o 00:08:30.222 CC test/thread/poller_perf/poller_perf.o 00:08:30.222 CC test/dma/test_dma/test_dma.o 00:08:30.222 CC examples/util/zipf/zipf.o 00:08:30.222 LINK iscsi_tgt 00:08:30.222 LINK interrupt_tgt 00:08:30.222 LINK spdk_tgt 00:08:30.222 LINK nvmf_tgt 00:08:30.222 LINK spdk_trace_record 00:08:30.222 LINK poller_perf 00:08:30.480 LINK zipf 00:08:30.480 LINK ioat_perf 00:08:30.480 LINK spdk_trace 00:08:30.480 CC app/spdk_lspci/spdk_lspci.o 00:08:30.480 CC examples/ioat/verify/verify.o 00:08:30.480 CC app/spdk_nvme_perf/perf.o 00:08:30.738 CC app/spdk_nvme_discover/discovery_aer.o 00:08:30.738 CC app/spdk_nvme_identify/identify.o 00:08:30.738 CC app/spdk_top/spdk_top.o 00:08:30.738 CC app/spdk_dd/spdk_dd.o 00:08:30.738 LINK spdk_lspci 00:08:30.738 CC test/app/bdev_svc/bdev_svc.o 00:08:30.738 LINK verify 00:08:30.738 LINK test_dma 00:08:30.997 LINK spdk_nvme_discover 00:08:30.997 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:08:30.997 LINK bdev_svc 00:08:31.255 CC examples/thread/thread/thread_ex.o 00:08:31.255 LINK spdk_dd 00:08:31.255 CC app/fio/nvme/fio_plugin.o 00:08:31.255 CC app/vhost/vhost.o 00:08:31.255 CC examples/sock/hello_world/hello_sock.o 00:08:31.514 LINK vhost 00:08:31.514 LINK nvme_fuzz 00:08:31.514 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:08:31.514 LINK thread 00:08:31.514 LINK hello_sock 00:08:31.772 CC examples/vmd/lsvmd/lsvmd.o 00:08:31.772 LINK spdk_nvme_perf 00:08:31.772 LINK 
spdk_nvme_identify 00:08:31.772 LINK lsvmd 00:08:31.772 CC examples/vmd/led/led.o 00:08:31.772 CC app/fio/bdev/fio_plugin.o 00:08:31.772 LINK spdk_top 00:08:31.772 CC examples/idxd/perf/perf.o 00:08:32.029 CC examples/accel/perf/accel_perf.o 00:08:32.029 LINK led 00:08:32.029 LINK spdk_nvme 00:08:32.029 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:08:32.309 CC examples/nvme/hello_world/hello_world.o 00:08:32.309 CC examples/blob/hello_world/hello_blob.o 00:08:32.309 CC examples/nvme/reconnect/reconnect.o 00:08:32.309 CC examples/fsdev/hello_world/hello_fsdev.o 00:08:32.309 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:08:32.309 CC examples/nvme/nvme_manage/nvme_manage.o 00:08:32.309 LINK idxd_perf 00:08:32.587 LINK hello_world 00:08:32.587 LINK hello_blob 00:08:32.587 LINK spdk_bdev 00:08:32.587 CC examples/nvme/arbitration/arbitration.o 00:08:32.587 LINK hello_fsdev 00:08:32.587 LINK reconnect 00:08:32.587 LINK accel_perf 00:08:32.587 CC examples/nvme/hotplug/hotplug.o 00:08:32.846 TEST_HEADER include/spdk/accel.h 00:08:32.846 TEST_HEADER include/spdk/accel_module.h 00:08:32.846 TEST_HEADER include/spdk/assert.h 00:08:32.846 TEST_HEADER include/spdk/barrier.h 00:08:32.846 TEST_HEADER include/spdk/base64.h 00:08:32.846 TEST_HEADER include/spdk/bdev.h 00:08:32.846 TEST_HEADER include/spdk/bdev_module.h 00:08:32.846 TEST_HEADER include/spdk/bdev_zone.h 00:08:32.846 TEST_HEADER include/spdk/bit_array.h 00:08:32.846 TEST_HEADER include/spdk/bit_pool.h 00:08:32.846 TEST_HEADER include/spdk/blob_bdev.h 00:08:32.846 TEST_HEADER include/spdk/blobfs_bdev.h 00:08:32.846 TEST_HEADER include/spdk/blobfs.h 00:08:32.846 TEST_HEADER include/spdk/blob.h 00:08:32.846 TEST_HEADER include/spdk/conf.h 00:08:32.846 TEST_HEADER include/spdk/config.h 00:08:32.846 TEST_HEADER include/spdk/cpuset.h 00:08:32.846 TEST_HEADER include/spdk/crc16.h 00:08:32.846 TEST_HEADER include/spdk/crc32.h 00:08:32.846 TEST_HEADER include/spdk/crc64.h 00:08:32.846 TEST_HEADER include/spdk/dif.h 
00:08:32.846 TEST_HEADER include/spdk/dma.h 00:08:32.846 TEST_HEADER include/spdk/endian.h 00:08:32.846 TEST_HEADER include/spdk/env_dpdk.h 00:08:32.846 TEST_HEADER include/spdk/env.h 00:08:32.846 TEST_HEADER include/spdk/event.h 00:08:32.846 TEST_HEADER include/spdk/fd_group.h 00:08:32.846 TEST_HEADER include/spdk/fd.h 00:08:32.846 TEST_HEADER include/spdk/file.h 00:08:32.846 TEST_HEADER include/spdk/fsdev.h 00:08:32.846 TEST_HEADER include/spdk/fsdev_module.h 00:08:32.846 TEST_HEADER include/spdk/ftl.h 00:08:32.846 TEST_HEADER include/spdk/fuse_dispatcher.h 00:08:32.846 LINK vhost_fuzz 00:08:32.846 TEST_HEADER include/spdk/gpt_spec.h 00:08:32.846 TEST_HEADER include/spdk/hexlify.h 00:08:32.846 TEST_HEADER include/spdk/histogram_data.h 00:08:32.846 TEST_HEADER include/spdk/idxd.h 00:08:32.846 CC examples/blob/cli/blobcli.o 00:08:32.846 TEST_HEADER include/spdk/idxd_spec.h 00:08:32.846 TEST_HEADER include/spdk/init.h 00:08:32.846 TEST_HEADER include/spdk/ioat.h 00:08:32.846 TEST_HEADER include/spdk/ioat_spec.h 00:08:32.846 TEST_HEADER include/spdk/iscsi_spec.h 00:08:32.846 TEST_HEADER include/spdk/json.h 00:08:32.846 TEST_HEADER include/spdk/jsonrpc.h 00:08:32.846 TEST_HEADER include/spdk/keyring.h 00:08:32.846 TEST_HEADER include/spdk/keyring_module.h 00:08:32.846 TEST_HEADER include/spdk/likely.h 00:08:32.846 TEST_HEADER include/spdk/log.h 00:08:32.846 TEST_HEADER include/spdk/lvol.h 00:08:32.846 TEST_HEADER include/spdk/md5.h 00:08:32.846 TEST_HEADER include/spdk/memory.h 00:08:32.846 TEST_HEADER include/spdk/mmio.h 00:08:32.846 TEST_HEADER include/spdk/nbd.h 00:08:32.846 TEST_HEADER include/spdk/net.h 00:08:32.846 TEST_HEADER include/spdk/notify.h 00:08:32.846 TEST_HEADER include/spdk/nvme.h 00:08:32.846 TEST_HEADER include/spdk/nvme_intel.h 00:08:32.846 TEST_HEADER include/spdk/nvme_ocssd.h 00:08:32.846 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:08:32.846 TEST_HEADER include/spdk/nvme_spec.h 00:08:32.847 TEST_HEADER include/spdk/nvme_zns.h 00:08:32.847 
TEST_HEADER include/spdk/nvmf_cmd.h 00:08:32.847 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:08:32.847 TEST_HEADER include/spdk/nvmf.h 00:08:32.847 TEST_HEADER include/spdk/nvmf_spec.h 00:08:32.847 TEST_HEADER include/spdk/nvmf_transport.h 00:08:32.847 TEST_HEADER include/spdk/opal.h 00:08:32.847 TEST_HEADER include/spdk/opal_spec.h 00:08:32.847 TEST_HEADER include/spdk/pci_ids.h 00:08:32.847 TEST_HEADER include/spdk/pipe.h 00:08:32.847 TEST_HEADER include/spdk/queue.h 00:08:32.847 TEST_HEADER include/spdk/reduce.h 00:08:32.847 TEST_HEADER include/spdk/rpc.h 00:08:32.847 TEST_HEADER include/spdk/scheduler.h 00:08:32.847 TEST_HEADER include/spdk/scsi.h 00:08:32.847 TEST_HEADER include/spdk/scsi_spec.h 00:08:32.847 TEST_HEADER include/spdk/sock.h 00:08:32.847 TEST_HEADER include/spdk/stdinc.h 00:08:32.847 TEST_HEADER include/spdk/string.h 00:08:32.847 TEST_HEADER include/spdk/thread.h 00:08:32.847 TEST_HEADER include/spdk/trace.h 00:08:32.847 TEST_HEADER include/spdk/trace_parser.h 00:08:32.847 TEST_HEADER include/spdk/tree.h 00:08:32.847 TEST_HEADER include/spdk/ublk.h 00:08:32.847 TEST_HEADER include/spdk/util.h 00:08:32.847 TEST_HEADER include/spdk/uuid.h 00:08:32.847 TEST_HEADER include/spdk/version.h 00:08:32.847 TEST_HEADER include/spdk/vfio_user_pci.h 00:08:32.847 LINK nvme_manage 00:08:32.847 CC examples/nvme/cmb_copy/cmb_copy.o 00:08:32.847 TEST_HEADER include/spdk/vfio_user_spec.h 00:08:32.847 TEST_HEADER include/spdk/vhost.h 00:08:32.847 TEST_HEADER include/spdk/vmd.h 00:08:32.847 TEST_HEADER include/spdk/xor.h 00:08:32.847 TEST_HEADER include/spdk/zipf.h 00:08:33.105 CXX test/cpp_headers/accel.o 00:08:33.105 CC test/app/histogram_perf/histogram_perf.o 00:08:33.105 CC test/app/jsoncat/jsoncat.o 00:08:33.105 LINK hotplug 00:08:33.105 LINK arbitration 00:08:33.105 CC test/app/stub/stub.o 00:08:33.105 CXX test/cpp_headers/accel_module.o 00:08:33.105 LINK histogram_perf 00:08:33.105 CXX test/cpp_headers/assert.o 00:08:33.105 LINK jsoncat 00:08:33.105 LINK 
cmb_copy 00:08:33.105 CXX test/cpp_headers/barrier.o 00:08:33.362 CC examples/nvme/abort/abort.o 00:08:33.362 LINK stub 00:08:33.362 CXX test/cpp_headers/base64.o 00:08:33.362 CXX test/cpp_headers/bdev.o 00:08:33.362 CXX test/cpp_headers/bdev_module.o 00:08:33.362 CXX test/cpp_headers/bdev_zone.o 00:08:33.362 CXX test/cpp_headers/bit_array.o 00:08:33.362 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:08:33.619 CXX test/cpp_headers/bit_pool.o 00:08:33.619 LINK blobcli 00:08:33.619 CXX test/cpp_headers/blob_bdev.o 00:08:33.619 CXX test/cpp_headers/blobfs_bdev.o 00:08:33.619 LINK pmr_persistence 00:08:33.619 CC test/env/vtophys/vtophys.o 00:08:33.619 CC test/env/mem_callbacks/mem_callbacks.o 00:08:33.619 CXX test/cpp_headers/blobfs.o 00:08:33.877 CXX test/cpp_headers/blob.o 00:08:33.877 LINK abort 00:08:33.877 CXX test/cpp_headers/conf.o 00:08:33.877 LINK iscsi_fuzz 00:08:33.877 CXX test/cpp_headers/config.o 00:08:33.877 LINK vtophys 00:08:33.877 CXX test/cpp_headers/cpuset.o 00:08:33.877 CC examples/bdev/hello_world/hello_bdev.o 00:08:33.877 CC examples/bdev/bdevperf/bdevperf.o 00:08:33.877 CXX test/cpp_headers/crc16.o 00:08:34.135 CXX test/cpp_headers/crc32.o 00:08:34.135 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:08:34.135 CC test/env/memory/memory_ut.o 00:08:34.135 CXX test/cpp_headers/crc64.o 00:08:34.135 CC test/rpc_client/rpc_client_test.o 00:08:34.135 LINK hello_bdev 00:08:34.135 LINK env_dpdk_post_init 00:08:34.394 CC test/event/event_perf/event_perf.o 00:08:34.394 CC test/env/pci/pci_ut.o 00:08:34.394 CC test/nvme/aer/aer.o 00:08:34.394 LINK mem_callbacks 00:08:34.394 CXX test/cpp_headers/dif.o 00:08:34.394 LINK rpc_client_test 00:08:34.394 CXX test/cpp_headers/dma.o 00:08:34.394 LINK event_perf 00:08:34.394 CXX test/cpp_headers/endian.o 00:08:34.394 CXX test/cpp_headers/env_dpdk.o 00:08:34.652 CXX test/cpp_headers/env.o 00:08:34.652 CXX test/cpp_headers/event.o 00:08:34.652 CC test/event/reactor/reactor.o 00:08:34.652 CXX 
test/cpp_headers/fd_group.o 00:08:34.652 CC test/nvme/reset/reset.o 00:08:34.652 LINK aer 00:08:34.652 CXX test/cpp_headers/fd.o 00:08:34.910 LINK pci_ut 00:08:34.910 CXX test/cpp_headers/file.o 00:08:34.910 LINK reactor 00:08:34.910 CXX test/cpp_headers/fsdev.o 00:08:34.910 CC test/nvme/sgl/sgl.o 00:08:34.910 CXX test/cpp_headers/fsdev_module.o 00:08:34.910 CXX test/cpp_headers/ftl.o 00:08:34.910 LINK reset 00:08:35.168 LINK bdevperf 00:08:35.168 CC test/event/reactor_perf/reactor_perf.o 00:08:35.168 CXX test/cpp_headers/fuse_dispatcher.o 00:08:35.168 CC test/nvme/e2edp/nvme_dp.o 00:08:35.168 CC test/nvme/overhead/overhead.o 00:08:35.168 LINK sgl 00:08:35.426 CC test/nvme/err_injection/err_injection.o 00:08:35.426 CC test/accel/dif/dif.o 00:08:35.426 LINK reactor_perf 00:08:35.426 CXX test/cpp_headers/gpt_spec.o 00:08:35.426 CC test/blobfs/mkfs/mkfs.o 00:08:35.426 LINK nvme_dp 00:08:35.426 LINK err_injection 00:08:35.688 CXX test/cpp_headers/hexlify.o 00:08:35.688 LINK memory_ut 00:08:35.688 CC test/event/app_repeat/app_repeat.o 00:08:35.688 CC examples/nvmf/nvmf/nvmf.o 00:08:35.688 LINK overhead 00:08:35.688 LINK mkfs 00:08:35.688 CC test/lvol/esnap/esnap.o 00:08:35.688 CXX test/cpp_headers/histogram_data.o 00:08:35.945 CC test/nvme/startup/startup.o 00:08:35.945 CXX test/cpp_headers/idxd.o 00:08:35.945 CXX test/cpp_headers/idxd_spec.o 00:08:35.945 LINK app_repeat 00:08:35.945 CC test/event/scheduler/scheduler.o 00:08:36.202 CC test/nvme/reserve/reserve.o 00:08:36.202 LINK startup 00:08:36.202 CC test/nvme/simple_copy/simple_copy.o 00:08:36.202 LINK nvmf 00:08:36.202 CC test/nvme/connect_stress/connect_stress.o 00:08:36.202 CXX test/cpp_headers/init.o 00:08:36.202 LINK dif 00:08:36.202 CXX test/cpp_headers/ioat.o 00:08:36.460 CC test/nvme/boot_partition/boot_partition.o 00:08:36.460 LINK reserve 00:08:36.460 LINK connect_stress 00:08:36.460 LINK scheduler 00:08:36.460 LINK simple_copy 00:08:36.460 CC test/nvme/compliance/nvme_compliance.o 00:08:36.460 CC 
test/nvme/fused_ordering/fused_ordering.o 00:08:36.460 LINK boot_partition 00:08:36.718 CXX test/cpp_headers/ioat_spec.o 00:08:36.718 CC test/nvme/doorbell_aers/doorbell_aers.o 00:08:36.718 CC test/nvme/fdp/fdp.o 00:08:36.718 CC test/nvme/cuse/cuse.o 00:08:36.718 LINK fused_ordering 00:08:36.718 CXX test/cpp_headers/iscsi_spec.o 00:08:36.718 CXX test/cpp_headers/json.o 00:08:36.718 CXX test/cpp_headers/jsonrpc.o 00:08:36.976 CC test/bdev/bdevio/bdevio.o 00:08:36.976 CXX test/cpp_headers/keyring.o 00:08:36.976 LINK nvme_compliance 00:08:36.976 CXX test/cpp_headers/keyring_module.o 00:08:36.976 CXX test/cpp_headers/likely.o 00:08:36.976 LINK doorbell_aers 00:08:36.976 CXX test/cpp_headers/log.o 00:08:37.235 CXX test/cpp_headers/lvol.o 00:08:37.235 CXX test/cpp_headers/md5.o 00:08:37.235 LINK fdp 00:08:37.235 CXX test/cpp_headers/memory.o 00:08:37.235 CXX test/cpp_headers/mmio.o 00:08:37.235 CXX test/cpp_headers/nbd.o 00:08:37.235 CXX test/cpp_headers/net.o 00:08:37.235 CXX test/cpp_headers/notify.o 00:08:37.235 CXX test/cpp_headers/nvme.o 00:08:37.235 CXX test/cpp_headers/nvme_intel.o 00:08:37.492 CXX test/cpp_headers/nvme_ocssd.o 00:08:37.492 LINK bdevio 00:08:37.492 CXX test/cpp_headers/nvme_ocssd_spec.o 00:08:37.492 CXX test/cpp_headers/nvme_spec.o 00:08:37.492 CXX test/cpp_headers/nvme_zns.o 00:08:37.492 CXX test/cpp_headers/nvmf_cmd.o 00:08:37.492 CXX test/cpp_headers/nvmf_fc_spec.o 00:08:37.492 CXX test/cpp_headers/nvmf.o 00:08:37.492 CXX test/cpp_headers/nvmf_spec.o 00:08:37.750 CXX test/cpp_headers/nvmf_transport.o 00:08:37.750 CXX test/cpp_headers/opal.o 00:08:37.750 CXX test/cpp_headers/opal_spec.o 00:08:37.750 CXX test/cpp_headers/pci_ids.o 00:08:37.750 CXX test/cpp_headers/pipe.o 00:08:37.750 CXX test/cpp_headers/queue.o 00:08:37.750 CXX test/cpp_headers/reduce.o 00:08:37.750 CXX test/cpp_headers/rpc.o 00:08:37.750 CXX test/cpp_headers/scheduler.o 00:08:37.750 CXX test/cpp_headers/scsi.o 00:08:38.008 CXX test/cpp_headers/scsi_spec.o 00:08:38.008 CXX 
test/cpp_headers/sock.o 00:08:38.008 CXX test/cpp_headers/stdinc.o 00:08:38.008 CXX test/cpp_headers/string.o 00:08:38.008 CXX test/cpp_headers/thread.o 00:08:38.008 CXX test/cpp_headers/trace.o 00:08:38.008 CXX test/cpp_headers/trace_parser.o 00:08:38.008 CXX test/cpp_headers/tree.o 00:08:38.266 CXX test/cpp_headers/ublk.o 00:08:38.266 CXX test/cpp_headers/util.o 00:08:38.266 CXX test/cpp_headers/uuid.o 00:08:38.266 CXX test/cpp_headers/version.o 00:08:38.266 CXX test/cpp_headers/vfio_user_pci.o 00:08:38.266 CXX test/cpp_headers/vfio_user_spec.o 00:08:38.266 CXX test/cpp_headers/vhost.o 00:08:38.266 CXX test/cpp_headers/vmd.o 00:08:38.266 CXX test/cpp_headers/xor.o 00:08:38.266 CXX test/cpp_headers/zipf.o 00:08:38.584 LINK cuse 00:08:43.901 LINK esnap 00:08:43.901 00:08:43.901 real 1m39.650s 00:08:43.901 user 9m10.218s 00:08:43.901 sys 1m43.823s 00:08:43.901 09:01:19 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:08:43.901 09:01:19 make -- common/autotest_common.sh@10 -- $ set +x 00:08:43.901 ************************************ 00:08:43.901 END TEST make 00:08:43.901 ************************************ 00:08:43.901 09:01:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:08:43.901 09:01:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:08:43.901 09:01:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:08:43.901 09:01:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.901 09:01:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:08:43.901 09:01:20 -- pm/common@44 -- $ pid=5247 00:08:43.901 09:01:20 -- pm/common@50 -- $ kill -TERM 5247 00:08:43.901 09:01:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.901 09:01:20 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:08:43.901 09:01:20 -- pm/common@44 -- $ pid=5249 00:08:43.901 09:01:20 -- pm/common@50 -- $ kill -TERM 5249 00:08:43.901 
09:01:20 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:08:43.901 09:01:20 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:43.901 09:01:20 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:43.901 09:01:20 -- common/autotest_common.sh@1691 -- # lcov --version 00:08:43.901 09:01:20 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:43.901 09:01:20 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:43.901 09:01:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.901 09:01:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.901 09:01:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.901 09:01:20 -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.901 09:01:20 -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.901 09:01:20 -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.901 09:01:20 -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.901 09:01:20 -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.901 09:01:20 -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.901 09:01:20 -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.901 09:01:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.901 09:01:20 -- scripts/common.sh@344 -- # case "$op" in 00:08:43.901 09:01:20 -- scripts/common.sh@345 -- # : 1 00:08:43.901 09:01:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.901 09:01:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.901 09:01:20 -- scripts/common.sh@365 -- # decimal 1 00:08:43.901 09:01:20 -- scripts/common.sh@353 -- # local d=1 00:08:43.901 09:01:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.901 09:01:20 -- scripts/common.sh@355 -- # echo 1 00:08:43.901 09:01:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.901 09:01:20 -- scripts/common.sh@366 -- # decimal 2 00:08:43.901 09:01:20 -- scripts/common.sh@353 -- # local d=2 00:08:43.901 09:01:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.901 09:01:20 -- scripts/common.sh@355 -- # echo 2 00:08:43.901 09:01:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.901 09:01:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.901 09:01:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.901 09:01:20 -- scripts/common.sh@368 -- # return 0 00:08:43.901 09:01:20 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.901 09:01:20 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.901 --rc genhtml_branch_coverage=1 00:08:43.901 --rc genhtml_function_coverage=1 00:08:43.901 --rc genhtml_legend=1 00:08:43.901 --rc geninfo_all_blocks=1 00:08:43.901 --rc geninfo_unexecuted_blocks=1 00:08:43.901 00:08:43.901 ' 00:08:43.901 09:01:20 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.901 --rc genhtml_branch_coverage=1 00:08:43.901 --rc genhtml_function_coverage=1 00:08:43.901 --rc genhtml_legend=1 00:08:43.901 --rc geninfo_all_blocks=1 00:08:43.901 --rc geninfo_unexecuted_blocks=1 00:08:43.901 00:08:43.901 ' 00:08:43.901 09:01:20 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:43.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.901 --rc genhtml_branch_coverage=1 00:08:43.901 --rc 
genhtml_function_coverage=1 00:08:43.901 --rc genhtml_legend=1 00:08:43.901 --rc geninfo_all_blocks=1 00:08:43.901 --rc geninfo_unexecuted_blocks=1 00:08:43.901 00:08:43.901 ' 00:08:43.901 09:01:20 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:43.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.902 --rc genhtml_branch_coverage=1 00:08:43.902 --rc genhtml_function_coverage=1 00:08:43.902 --rc genhtml_legend=1 00:08:43.902 --rc geninfo_all_blocks=1 00:08:43.902 --rc geninfo_unexecuted_blocks=1 00:08:43.902 00:08:43.902 ' 00:08:43.902 09:01:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.902 09:01:20 -- nvmf/common.sh@7 -- # uname -s 00:08:43.902 09:01:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.902 09:01:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.902 09:01:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.902 09:01:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.902 09:01:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.902 09:01:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.902 09:01:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.902 09:01:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.902 09:01:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.902 09:01:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.902 09:01:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:36a6fb36-c4b3-4edd-b999-d3b5472e1260 00:08:43.902 09:01:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=36a6fb36-c4b3-4edd-b999-d3b5472e1260 00:08:43.902 09:01:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.902 09:01:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.902 09:01:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:43.902 09:01:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:08:43.902 09:01:20 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.902 09:01:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.902 09:01:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.902 09:01:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.902 09:01:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.902 09:01:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.902 09:01:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.902 09:01:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.902 09:01:20 -- paths/export.sh@5 -- # export PATH 00:08:43.902 09:01:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.902 09:01:20 -- nvmf/common.sh@51 -- # : 0 00:08:43.902 09:01:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.902 09:01:20 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.902 09:01:20 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:08:43.902 09:01:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.902 09:01:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.902 09:01:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.902 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.902 09:01:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.902 09:01:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.902 09:01:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.902 09:01:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:08:43.902 09:01:20 -- spdk/autotest.sh@32 -- # uname -s 00:08:43.902 09:01:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:08:43.902 09:01:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:08:43.902 09:01:20 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:43.902 09:01:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:08:43.902 09:01:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:08:43.902 09:01:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:08:43.902 09:01:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:08:43.902 09:01:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:08:43.902 09:01:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:08:43.902 09:01:20 -- spdk/autotest.sh@48 -- # udevadm_pid=54308 00:08:43.902 09:01:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:08:43.902 09:01:20 -- pm/common@17 -- # local monitor 00:08:43.902 09:01:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.902 09:01:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:08:43.902 09:01:20 -- pm/common@25 -- # sleep 1 00:08:43.902 09:01:20 -- pm/common@21 -- # date +%s 00:08:43.902 09:01:20 -- 
pm/common@21 -- # date +%s 00:08:43.902 09:01:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730883680 00:08:43.902 09:01:20 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730883680 00:08:43.902 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730883680_collect-cpu-load.pm.log 00:08:43.902 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730883680_collect-vmstat.pm.log 00:08:44.837 09:01:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:08:44.837 09:01:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:08:44.837 09:01:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.837 09:01:21 -- common/autotest_common.sh@10 -- # set +x 00:08:44.837 09:01:21 -- spdk/autotest.sh@59 -- # create_test_list 00:08:44.837 09:01:21 -- common/autotest_common.sh@750 -- # xtrace_disable 00:08:44.837 09:01:21 -- common/autotest_common.sh@10 -- # set +x 00:08:44.837 09:01:21 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:08:44.837 09:01:21 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:08:44.837 09:01:21 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:08:44.837 09:01:21 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:08:44.837 09:01:21 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:08:44.837 09:01:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:08:44.837 09:01:21 -- common/autotest_common.sh@1455 -- # uname 00:08:44.837 09:01:21 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:08:44.837 09:01:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:08:44.837 09:01:21 -- common/autotest_common.sh@1475 -- 
# uname 00:08:44.837 09:01:21 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:08:44.837 09:01:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:08:44.837 09:01:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:08:45.095 lcov: LCOV version 1.15 00:08:45.095 09:01:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:09:03.185 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:09:03.185 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:09:21.267 09:01:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:09:21.267 09:01:54 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.267 09:01:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.267 09:01:54 -- spdk/autotest.sh@78 -- # rm -f 00:09:21.267 09:01:54 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:21.267 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:21.267 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:09:21.267 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:09:21.267 09:01:55 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:09:21.267 09:01:55 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:21.267 09:01:55 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:21.267 09:01:55 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:21.267 
09:01:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:21.267 09:01:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:21.267 09:01:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:21.267 09:01:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:21.267 09:01:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:21.267 09:01:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:21.267 09:01:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:21.267 09:01:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:21.267 09:01:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:21.267 09:01:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:21.267 09:01:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:21.267 09:01:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:09:21.267 09:01:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:09:21.267 09:01:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:09:21.267 09:01:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:21.267 09:01:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:21.267 09:01:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:09:21.267 09:01:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:09:21.267 09:01:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:09:21.267 09:01:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:21.267 09:01:55 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:09:21.267 09:01:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:21.268 09:01:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:21.268 09:01:55 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:09:21.268 09:01:55 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:09:21.268 09:01:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:09:21.268 No valid GPT data, bailing 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # pt= 00:09:21.268 09:01:55 -- scripts/common.sh@395 -- # return 1 00:09:21.268 09:01:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:09:21.268 1+0 records in 00:09:21.268 1+0 records out 00:09:21.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444919 s, 236 MB/s 00:09:21.268 09:01:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:21.268 09:01:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:21.268 09:01:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:09:21.268 09:01:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:09:21.268 09:01:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:09:21.268 No valid GPT data, bailing 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # pt= 00:09:21.268 09:01:55 -- scripts/common.sh@395 -- # return 1 00:09:21.268 09:01:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:09:21.268 1+0 records in 00:09:21.268 1+0 records out 00:09:21.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389893 s, 269 MB/s 00:09:21.268 09:01:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:21.268 09:01:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:21.268 09:01:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:09:21.268 09:01:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:09:21.268 09:01:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:09:21.268 No valid GPT data, bailing 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # pt= 00:09:21.268 09:01:55 -- scripts/common.sh@395 -- # return 1 00:09:21.268 09:01:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:09:21.268 1+0 records in 00:09:21.268 1+0 records out 00:09:21.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480391 s, 218 MB/s 00:09:21.268 09:01:55 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:09:21.268 09:01:55 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:09:21.268 09:01:55 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:09:21.268 09:01:55 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:09:21.268 09:01:55 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:09:21.268 No valid GPT data, bailing 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:09:21.268 09:01:55 -- scripts/common.sh@394 -- # pt= 00:09:21.268 09:01:55 -- scripts/common.sh@395 -- # return 1 00:09:21.268 09:01:55 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:09:21.268 1+0 records in 00:09:21.268 1+0 records out 00:09:21.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471583 s, 222 MB/s 00:09:21.268 09:01:55 -- spdk/autotest.sh@105 -- # sync 00:09:21.268 09:01:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:09:21.268 09:01:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:09:21.268 09:01:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:09:21.526 09:01:58 -- spdk/autotest.sh@111 -- # uname -s 00:09:21.526 09:01:58 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:09:21.526 09:01:58 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:09:21.526 09:01:58 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
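The per-namespace wipe traced above probes each device with `scripts/spdk-gpt.py` and `blkid -s PTTYPE`, and zeroes the first MiB of anything that carries no partition table ("No valid GPT data, bailing"). A hedged stand-in that only checks the GPT signature (`EFI PART` at LBA 1, byte offset 512) — deliberately simpler than the real probe:

```shell
# Simplified stand-in for the block_in_use probe: the GPT primary header
# lives at LBA 1 (offset 512 for 512-byte sectors) and begins "EFI PART".
# This is an assumption-level sketch, not spdk-gpt.py's full validation.
has_gpt() {
    local block=$1
    [[ $(dd if="$block" bs=1 skip=512 count=8 2>/dev/null) == 'EFI PART' ]]
}

wipe_if_free() {
    local dev=$1
    # Mirrors the trace: a device with a GPT is in use and left alone;
    # otherwise its first MiB is zeroed, as in the dd lines above.
    has_gpt "$dev" && return 0
    dd if=/dev/zero of="$dev" bs=1M count=1 conv=notrunc 2>/dev/null
}
```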
00:09:22.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.352 Hugepages 00:09:22.352 node hugesize free / total 00:09:22.352 node0 1048576kB 0 / 0 00:09:22.352 node0 2048kB 0 / 0 00:09:22.352 00:09:22.352 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:22.352 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:22.352 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:22.352 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:22.610 09:01:58 -- spdk/autotest.sh@117 -- # uname -s 00:09:22.610 09:01:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:09:22.610 09:01:58 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:09:22.610 09:01:58 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:23.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:23.177 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.435 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:23.435 09:01:59 -- common/autotest_common.sh@1515 -- # sleep 1 00:09:24.371 09:02:00 -- common/autotest_common.sh@1516 -- # bdfs=() 00:09:24.371 09:02:00 -- common/autotest_common.sh@1516 -- # local bdfs 00:09:24.371 09:02:00 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:09:24.371 09:02:00 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:09:24.371 09:02:00 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:24.371 09:02:00 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:24.371 09:02:00 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:24.371 09:02:00 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:24.371 09:02:00 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:24.371 09:02:00 -- 
common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:24.371 09:02:00 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:24.371 09:02:00 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.939 Waiting for block devices as requested 00:09:24.939 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.939 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.939 09:02:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:24.939 09:02:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:09:24.939 09:02:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:24.939 09:02:01 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:09:25.197 09:02:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:25.197 09:02:01 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:09:25.197 09:02:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:09:25.197 09:02:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:09:25.197 09:02:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:09:25.197 09:02:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:09:25.197 09:02:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:09:25.197 09:02:01 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:25.197 09:02:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:25.197 09:02:01 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:25.197 09:02:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:25.197 09:02:01 -- common/autotest_common.sh@1532 -- 
# [[ 8 -ne 0 ]] 00:09:25.197 09:02:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:09:25.197 09:02:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:25.197 09:02:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:25.197 09:02:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:25.197 09:02:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:25.197 09:02:01 -- common/autotest_common.sh@1541 -- # continue 00:09:25.197 09:02:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:09:25.197 09:02:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:09:25.197 09:02:01 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:09:25.197 09:02:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:09:25.197 09:02:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:25.197 09:02:01 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:09:25.197 09:02:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:09:25.197 09:02:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:09:25.198 09:02:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:09:25.198 09:02:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:09:25.198 09:02:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:09:25.198 09:02:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:09:25.198 09:02:01 -- common/autotest_common.sh@1529 -- # grep oacs 00:09:25.198 09:02:01 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:09:25.198 09:02:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:09:25.198 09:02:01 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:09:25.198 09:02:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl 
/dev/nvme0 00:09:25.198 09:02:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:09:25.198 09:02:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:09:25.198 09:02:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:09:25.198 09:02:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:09:25.198 09:02:01 -- common/autotest_common.sh@1541 -- # continue 00:09:25.198 09:02:01 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:09:25.198 09:02:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.198 09:02:01 -- common/autotest_common.sh@10 -- # set +x 00:09:25.198 09:02:01 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:09:25.198 09:02:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:25.198 09:02:01 -- common/autotest_common.sh@10 -- # set +x 00:09:25.198 09:02:01 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:26.047 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.047 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:26.047 09:02:02 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:09:26.047 09:02:02 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.047 09:02:02 -- common/autotest_common.sh@10 -- # set +x 00:09:26.047 09:02:02 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:09:26.047 09:02:02 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:09:26.047 09:02:02 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:09:26.047 09:02:02 -- common/autotest_common.sh@1561 -- # bdfs=() 00:09:26.047 09:02:02 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:09:26.047 09:02:02 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:09:26.047 09:02:02 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:09:26.047 09:02:02 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:09:26.047 
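The `id-ctrl` parsing traced above pulls single fields out of `nvme id-ctrl` output with `grep` + `cut -d: -f2`, then masks OACS bit 3 (0x8, Namespace Management support), which is what `oacs_ns_manage=8` records. A sketch over a captured output line — the sample values below are copied from the trace, not read from live nvme-cli:

```shell
# Extract "field : value" lines the way the trace does (grep + cut).
id_ctrl_field() {
    local output=$1 field=$2
    grep "$field" <<<"$output" | cut -d: -f2
}

# Sample lines as they appear in nvme id-ctrl output for this QEMU controller
sample='oacs      : 0x12a
unvmcap   : 0'

oacs=$(id_ctrl_field "$sample" oacs)        # ' 0x12a', leading space and all
oacs_ns_manage=$(( oacs & 0x8 ))            # bit 3 set -> namespace management supported
unvmcap=$(id_ctrl_field "$sample" unvmcap)  # ' 0': no unallocated NVM capacity
```

With `oacs_ns_manage` nonzero and `unvmcap` zero, the loop hits `continue` for both controllers, exactly as logged.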
09:02:02 -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:26.047 09:02:02 -- common/autotest_common.sh@1496 -- # local bdfs 00:09:26.047 09:02:02 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:26.047 09:02:02 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:26.047 09:02:02 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:26.047 09:02:02 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:09:26.047 09:02:02 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:26.047 09:02:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:26.047 09:02:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:26.047 09:02:02 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:26.048 09:02:02 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:26.048 09:02:02 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:09:26.048 09:02:02 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:26.048 09:02:02 -- common/autotest_common.sh@1564 -- # device=0x0010 00:09:26.048 09:02:02 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:26.048 09:02:02 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:09:26.048 09:02:02 -- common/autotest_common.sh@1570 -- # return 0 00:09:26.048 09:02:02 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:09:26.048 09:02:02 -- common/autotest_common.sh@1578 -- # return 0 00:09:26.048 09:02:02 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:09:26.048 09:02:02 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:09:26.048 09:02:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:26.048 09:02:02 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:09:26.048 09:02:02 -- spdk/autotest.sh@149 -- # timing_enter lib 00:09:26.048 09:02:02 -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.048 09:02:02 -- common/autotest_common.sh@10 -- # set +x 00:09:26.048 09:02:02 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:09:26.307 09:02:02 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:26.307 09:02:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:26.307 09:02:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.307 09:02:02 -- common/autotest_common.sh@10 -- # set +x 00:09:26.307 ************************************ 00:09:26.307 START TEST env 00:09:26.307 ************************************ 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:26.307 * Looking for test storage... 00:09:26.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1691 -- # lcov --version 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:26.307 09:02:02 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.307 09:02:02 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.307 09:02:02 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.307 09:02:02 env -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.307 09:02:02 env -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.307 09:02:02 env -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.307 09:02:02 env -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.307 09:02:02 env -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.307 09:02:02 env -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.307 09:02:02 env -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.307 09:02:02 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.307 09:02:02 env -- 
scripts/common.sh@344 -- # case "$op" in 00:09:26.307 09:02:02 env -- scripts/common.sh@345 -- # : 1 00:09:26.307 09:02:02 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.307 09:02:02 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.307 09:02:02 env -- scripts/common.sh@365 -- # decimal 1 00:09:26.307 09:02:02 env -- scripts/common.sh@353 -- # local d=1 00:09:26.307 09:02:02 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.307 09:02:02 env -- scripts/common.sh@355 -- # echo 1 00:09:26.307 09:02:02 env -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.307 09:02:02 env -- scripts/common.sh@366 -- # decimal 2 00:09:26.307 09:02:02 env -- scripts/common.sh@353 -- # local d=2 00:09:26.307 09:02:02 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.307 09:02:02 env -- scripts/common.sh@355 -- # echo 2 00:09:26.307 09:02:02 env -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.307 09:02:02 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.307 09:02:02 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.307 09:02:02 env -- scripts/common.sh@368 -- # return 0 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:26.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.307 --rc genhtml_branch_coverage=1 00:09:26.307 --rc genhtml_function_coverage=1 00:09:26.307 --rc genhtml_legend=1 00:09:26.307 --rc geninfo_all_blocks=1 00:09:26.307 --rc geninfo_unexecuted_blocks=1 00:09:26.307 00:09:26.307 ' 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:26.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.307 --rc genhtml_branch_coverage=1 00:09:26.307 --rc genhtml_function_coverage=1 00:09:26.307 --rc genhtml_legend=1 00:09:26.307 --rc 
geninfo_all_blocks=1 00:09:26.307 --rc geninfo_unexecuted_blocks=1 00:09:26.307 00:09:26.307 ' 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:26.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.307 --rc genhtml_branch_coverage=1 00:09:26.307 --rc genhtml_function_coverage=1 00:09:26.307 --rc genhtml_legend=1 00:09:26.307 --rc geninfo_all_blocks=1 00:09:26.307 --rc geninfo_unexecuted_blocks=1 00:09:26.307 00:09:26.307 ' 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:26.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.307 --rc genhtml_branch_coverage=1 00:09:26.307 --rc genhtml_function_coverage=1 00:09:26.307 --rc genhtml_legend=1 00:09:26.307 --rc geninfo_all_blocks=1 00:09:26.307 --rc geninfo_unexecuted_blocks=1 00:09:26.307 00:09:26.307 ' 00:09:26.307 09:02:02 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:26.307 09:02:02 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.307 09:02:02 env -- common/autotest_common.sh@10 -- # set +x 00:09:26.307 ************************************ 00:09:26.307 START TEST env_memory 00:09:26.307 ************************************ 00:09:26.307 09:02:02 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:26.307 00:09:26.307 00:09:26.307 CUnit - A unit testing framework for C - Version 2.1-3 00:09:26.307 http://cunit.sourceforge.net/ 00:09:26.307 00:09:26.307 00:09:26.307 Suite: memory 00:09:26.566 Test: alloc and free memory map ...[2024-11-06 09:02:02.883449] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:26.566 passed 00:09:26.566 Test: mem map translation ...[2024-11-06 09:02:02.945002] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:26.566 [2024-11-06 09:02:02.945378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:26.566 [2024-11-06 09:02:02.945658] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:26.566 [2024-11-06 09:02:02.945901] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:26.566 passed 00:09:26.566 Test: mem map registration ...[2024-11-06 09:02:03.052097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:09:26.566 [2024-11-06 09:02:03.052410] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:09:26.566 passed 00:09:26.825 Test: mem map adjacent registrations ...passed 00:09:26.825 00:09:26.825 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.825 suites 1 1 n/a 0 0 00:09:26.825 tests 4 4 4 0 0 00:09:26.825 asserts 152 152 152 0 n/a 00:09:26.825 00:09:26.825 Elapsed time = 0.352 seconds 00:09:26.825 00:09:26.825 real 0m0.397s 00:09:26.825 user 0m0.362s 00:09:26.825 sys 0m0.025s 00:09:26.825 09:02:03 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.825 09:02:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 ************************************ 00:09:26.825 END TEST env_memory 00:09:26.825 ************************************ 00:09:26.825 09:02:03 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:26.825 
09:02:03 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:26.825 09:02:03 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.825 09:02:03 env -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 ************************************ 00:09:26.825 START TEST env_vtophys 00:09:26.825 ************************************ 00:09:26.825 09:02:03 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:26.825 EAL: lib.eal log level changed from notice to debug 00:09:26.825 EAL: Detected lcore 0 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 1 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 2 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 3 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 4 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 5 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 6 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 7 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 8 as core 0 on socket 0 00:09:26.825 EAL: Detected lcore 9 as core 0 on socket 0 00:09:26.825 EAL: Maximum logical cores by configuration: 128 00:09:26.825 EAL: Detected CPU lcores: 10 00:09:26.825 EAL: Detected NUMA nodes: 1 00:09:26.825 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:26.825 EAL: Detected shared linkage of DPDK 00:09:26.825 EAL: No shared files mode enabled, IPC will be disabled 00:09:27.083 EAL: Selected IOVA mode 'PA' 00:09:27.083 EAL: Probing VFIO support... 00:09:27.083 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:27.083 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:27.083 EAL: Ask a virtual area of 0x2e000 bytes 00:09:27.084 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:27.084 EAL: Setting up physically contiguous memory... 
00:09:27.084 EAL: Setting maximum number of open files to 524288 00:09:27.084 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:27.084 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:27.084 EAL: Ask a virtual area of 0x61000 bytes 00:09:27.084 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:27.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:27.084 EAL: Ask a virtual area of 0x400000000 bytes 00:09:27.084 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:27.084 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:27.084 EAL: Ask a virtual area of 0x61000 bytes 00:09:27.084 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:27.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:27.084 EAL: Ask a virtual area of 0x400000000 bytes 00:09:27.084 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:27.084 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:27.084 EAL: Ask a virtual area of 0x61000 bytes 00:09:27.084 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:27.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:27.084 EAL: Ask a virtual area of 0x400000000 bytes 00:09:27.084 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:27.084 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:27.084 EAL: Ask a virtual area of 0x61000 bytes 00:09:27.084 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:27.084 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:27.084 EAL: Ask a virtual area of 0x400000000 bytes 00:09:27.084 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:27.084 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:27.084 EAL: Hugepages will be freed exactly as allocated. 
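A quick arithmetic check on the EAL reservations above: each of the four memseg lists holds `n_segs:8192` slots of `hugepage_sz:2097152` (2 MiB), so the virtual-address window per list is 8192 × 2 MiB, matching every `size = 0x400000000` line in the trace:

```shell
# Verify the per-memseg-list VA reservation printed by EAL:
# 8192 segments * 2 MiB hugepages = 2^13 * 2^21 = 2^34 bytes.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))            # 2097152 bytes, the 2 MiB page size
list_bytes=$(( n_segs * hugepage_sz ))
printf '0x%x\n' "$list_bytes"               # prints 0x400000000
```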
00:09:27.084 EAL: No shared files mode enabled, IPC is disabled 00:09:27.084 EAL: No shared files mode enabled, IPC is disabled 00:09:27.084 EAL: TSC frequency is ~2200000 KHz 00:09:27.084 EAL: Main lcore 0 is ready (tid=7f3a8340da40;cpuset=[0]) 00:09:27.084 EAL: Trying to obtain current memory policy. 00:09:27.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.084 EAL: Restoring previous memory policy: 0 00:09:27.084 EAL: request: mp_malloc_sync 00:09:27.084 EAL: No shared files mode enabled, IPC is disabled 00:09:27.084 EAL: Heap on socket 0 was expanded by 2MB 00:09:27.084 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:27.084 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:27.084 EAL: Mem event callback 'spdk:(nil)' registered 00:09:27.084 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:27.084 00:09:27.084 00:09:27.084 CUnit - A unit testing framework for C - Version 2.1-3 00:09:27.084 http://cunit.sourceforge.net/ 00:09:27.084 00:09:27.084 00:09:27.084 Suite: components_suite 00:09:27.653 Test: vtophys_malloc_test ...passed 00:09:27.653 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:27.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.653 EAL: Restoring previous memory policy: 4 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was expanded by 4MB 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was shrunk by 4MB 00:09:27.653 EAL: Trying to obtain current memory policy. 
00:09:27.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.653 EAL: Restoring previous memory policy: 4 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was expanded by 6MB 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was shrunk by 6MB 00:09:27.653 EAL: Trying to obtain current memory policy. 00:09:27.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.653 EAL: Restoring previous memory policy: 4 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was expanded by 10MB 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was shrunk by 10MB 00:09:27.653 EAL: Trying to obtain current memory policy. 00:09:27.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.653 EAL: Restoring previous memory policy: 4 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was expanded by 18MB 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was shrunk by 18MB 00:09:27.653 EAL: Trying to obtain current memory policy. 
00:09:27.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.653 EAL: Restoring previous memory policy: 4 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was expanded by 34MB 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was shrunk by 34MB 00:09:27.653 EAL: Trying to obtain current memory policy. 00:09:27.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.653 EAL: Restoring previous memory policy: 4 00:09:27.653 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.653 EAL: request: mp_malloc_sync 00:09:27.653 EAL: No shared files mode enabled, IPC is disabled 00:09:27.653 EAL: Heap on socket 0 was expanded by 66MB 00:09:27.912 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.912 EAL: request: mp_malloc_sync 00:09:27.912 EAL: No shared files mode enabled, IPC is disabled 00:09:27.912 EAL: Heap on socket 0 was shrunk by 66MB 00:09:27.912 EAL: Trying to obtain current memory policy. 00:09:27.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:27.912 EAL: Restoring previous memory policy: 4 00:09:27.912 EAL: Calling mem event callback 'spdk:(nil)' 00:09:27.912 EAL: request: mp_malloc_sync 00:09:27.912 EAL: No shared files mode enabled, IPC is disabled 00:09:27.912 EAL: Heap on socket 0 was expanded by 130MB 00:09:28.170 EAL: Calling mem event callback 'spdk:(nil)' 00:09:28.170 EAL: request: mp_malloc_sync 00:09:28.170 EAL: No shared files mode enabled, IPC is disabled 00:09:28.170 EAL: Heap on socket 0 was shrunk by 130MB 00:09:28.428 EAL: Trying to obtain current memory policy. 
00:09:28.429 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:28.429 EAL: Restoring previous memory policy: 4 00:09:28.429 EAL: Calling mem event callback 'spdk:(nil)' 00:09:28.429 EAL: request: mp_malloc_sync 00:09:28.429 EAL: No shared files mode enabled, IPC is disabled 00:09:28.429 EAL: Heap on socket 0 was expanded by 258MB 00:09:29.004 EAL: Calling mem event callback 'spdk:(nil)' 00:09:29.004 EAL: request: mp_malloc_sync 00:09:29.004 EAL: No shared files mode enabled, IPC is disabled 00:09:29.004 EAL: Heap on socket 0 was shrunk by 258MB 00:09:29.263 EAL: Trying to obtain current memory policy. 00:09:29.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:29.522 EAL: Restoring previous memory policy: 4 00:09:29.522 EAL: Calling mem event callback 'spdk:(nil)' 00:09:29.522 EAL: request: mp_malloc_sync 00:09:29.522 EAL: No shared files mode enabled, IPC is disabled 00:09:29.522 EAL: Heap on socket 0 was expanded by 514MB 00:09:30.465 EAL: Calling mem event callback 'spdk:(nil)' 00:09:30.465 EAL: request: mp_malloc_sync 00:09:30.465 EAL: No shared files mode enabled, IPC is disabled 00:09:30.465 EAL: Heap on socket 0 was shrunk by 514MB 00:09:31.038 EAL: Trying to obtain current memory policy. 
00:09:31.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:31.296 EAL: Restoring previous memory policy: 4 00:09:31.296 EAL: Calling mem event callback 'spdk:(nil)' 00:09:31.296 EAL: request: mp_malloc_sync 00:09:31.296 EAL: No shared files mode enabled, IPC is disabled 00:09:31.296 EAL: Heap on socket 0 was expanded by 1026MB 00:09:33.206 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.206 EAL: request: mp_malloc_sync 00:09:33.206 EAL: No shared files mode enabled, IPC is disabled 00:09:33.206 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:34.582 passed 00:09:34.582 00:09:34.582 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.582 suites 1 1 n/a 0 0 00:09:34.582 tests 2 2 2 0 0 00:09:34.582 asserts 5761 5761 5761 0 n/a 00:09:34.582 00:09:34.582 Elapsed time = 7.381 seconds 00:09:34.582 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.582 EAL: request: mp_malloc_sync 00:09:34.582 EAL: No shared files mode enabled, IPC is disabled 00:09:34.582 EAL: Heap on socket 0 was shrunk by 2MB 00:09:34.582 EAL: No shared files mode enabled, IPC is disabled 00:09:34.582 EAL: No shared files mode enabled, IPC is disabled 00:09:34.582 EAL: No shared files mode enabled, IPC is disabled 00:09:34.582 00:09:34.582 real 0m7.730s 00:09:34.582 user 0m6.528s 00:09:34.582 sys 0m1.039s 00:09:34.582 ************************************ 00:09:34.582 END TEST env_vtophys 00:09:34.582 ************************************ 00:09:34.582 09:02:10 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.582 09:02:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:34.582 09:02:11 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:34.582 09:02:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:34.582 09:02:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.583 09:02:11 env -- common/autotest_common.sh@10 -- # set +x 00:09:34.583 
************************************ 00:09:34.583 START TEST env_pci 00:09:34.583 ************************************ 00:09:34.583 09:02:11 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:34.583 00:09:34.583 00:09:34.583 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.583 http://cunit.sourceforge.net/ 00:09:34.583 00:09:34.583 00:09:34.583 Suite: pci 00:09:34.583 Test: pci_hook ...[2024-11-06 09:02:11.079384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56641 has claimed it 00:09:34.583 passed 00:09:34.583 00:09:34.583 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.583 suites 1 1 n/a 0 0 00:09:34.583 tests 1 1 1 0 0 00:09:34.583 asserts 25 25 25 0 n/a 00:09:34.583 00:09:34.583 Elapsed time = 0.007 secondsEAL: Cannot find device (10000:00:01.0) 00:09:34.583 EAL: Failed to attach device on primary process 00:09:34.583 00:09:34.840 00:09:34.840 real 0m0.083s 00:09:34.840 user 0m0.042s 00:09:34.840 sys 0m0.040s 00:09:34.840 ************************************ 00:09:34.840 END TEST env_pci 00:09:34.840 09:02:11 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:34.840 09:02:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:34.840 ************************************ 00:09:34.840 09:02:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:34.840 09:02:11 env -- env/env.sh@15 -- # uname 00:09:34.840 09:02:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:34.840 09:02:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:34.840 09:02:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:34.840 09:02:11 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:34.840 09:02:11 env 
-- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:34.841 09:02:11 env -- common/autotest_common.sh@10 -- # set +x 00:09:34.841 ************************************ 00:09:34.841 START TEST env_dpdk_post_init 00:09:34.841 ************************************ 00:09:34.841 09:02:11 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:34.841 EAL: Detected CPU lcores: 10 00:09:34.841 EAL: Detected NUMA nodes: 1 00:09:34.841 EAL: Detected shared linkage of DPDK 00:09:34.841 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:34.841 EAL: Selected IOVA mode 'PA' 00:09:35.098 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:35.098 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:35.098 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:35.098 Starting DPDK initialization... 00:09:35.098 Starting SPDK post initialization... 00:09:35.098 SPDK NVMe probe 00:09:35.098 Attaching to 0000:00:10.0 00:09:35.098 Attaching to 0000:00:11.0 00:09:35.098 Attached to 0000:00:10.0 00:09:35.098 Attached to 0000:00:11.0 00:09:35.098 Cleaning up... 
00:09:35.098 00:09:35.098 real 0m0.302s 00:09:35.098 user 0m0.099s 00:09:35.098 sys 0m0.102s 00:09:35.098 ************************************ 00:09:35.098 END TEST env_dpdk_post_init 00:09:35.098 ************************************ 00:09:35.099 09:02:11 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.099 09:02:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:35.099 09:02:11 env -- env/env.sh@26 -- # uname 00:09:35.099 09:02:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:35.099 09:02:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:35.099 09:02:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:35.099 09:02:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.099 09:02:11 env -- common/autotest_common.sh@10 -- # set +x 00:09:35.099 ************************************ 00:09:35.099 START TEST env_mem_callbacks 00:09:35.099 ************************************ 00:09:35.099 09:02:11 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:35.099 EAL: Detected CPU lcores: 10 00:09:35.099 EAL: Detected NUMA nodes: 1 00:09:35.099 EAL: Detected shared linkage of DPDK 00:09:35.099 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:35.099 EAL: Selected IOVA mode 'PA' 00:09:35.356 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:35.356 00:09:35.356 00:09:35.356 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.356 http://cunit.sourceforge.net/ 00:09:35.356 00:09:35.356 00:09:35.356 Suite: memory 00:09:35.356 Test: test ... 
00:09:35.356 register 0x200000200000 2097152 00:09:35.356 malloc 3145728 00:09:35.356 register 0x200000400000 4194304 00:09:35.356 buf 0x2000004fffc0 len 3145728 PASSED 00:09:35.356 malloc 64 00:09:35.356 buf 0x2000004ffec0 len 64 PASSED 00:09:35.356 malloc 4194304 00:09:35.356 register 0x200000800000 6291456 00:09:35.356 buf 0x2000009fffc0 len 4194304 PASSED 00:09:35.356 free 0x2000004fffc0 3145728 00:09:35.356 free 0x2000004ffec0 64 00:09:35.356 unregister 0x200000400000 4194304 PASSED 00:09:35.356 free 0x2000009fffc0 4194304 00:09:35.356 unregister 0x200000800000 6291456 PASSED 00:09:35.356 malloc 8388608 00:09:35.356 register 0x200000400000 10485760 00:09:35.356 buf 0x2000005fffc0 len 8388608 PASSED 00:09:35.356 free 0x2000005fffc0 8388608 00:09:35.356 unregister 0x200000400000 10485760 PASSED 00:09:35.356 passed 00:09:35.356 00:09:35.356 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.356 suites 1 1 n/a 0 0 00:09:35.356 tests 1 1 1 0 0 00:09:35.356 asserts 15 15 15 0 n/a 00:09:35.356 00:09:35.356 Elapsed time = 0.060 seconds 00:09:35.356 00:09:35.356 real 0m0.271s 00:09:35.356 user 0m0.093s 00:09:35.356 sys 0m0.073s 00:09:35.356 09:02:11 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.356 09:02:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:35.356 ************************************ 00:09:35.356 END TEST env_mem_callbacks 00:09:35.357 ************************************ 00:09:35.357 00:09:35.357 real 0m9.253s 00:09:35.357 user 0m7.326s 00:09:35.357 sys 0m1.528s 00:09:35.357 09:02:11 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:35.357 ************************************ 00:09:35.357 END TEST env 00:09:35.357 ************************************ 00:09:35.357 09:02:11 env -- common/autotest_common.sh@10 -- # set +x 00:09:35.357 09:02:11 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:35.357 09:02:11 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:35.357 09:02:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:35.357 09:02:11 -- common/autotest_common.sh@10 -- # set +x 00:09:35.614 ************************************ 00:09:35.614 START TEST rpc 00:09:35.614 ************************************ 00:09:35.614 09:02:11 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:35.614 * Looking for test storage... 00:09:35.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:35.614 09:02:11 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:35.614 09:02:11 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:35.614 09:02:11 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:35.614 09:02:12 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:35.614 09:02:12 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.614 09:02:12 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.614 09:02:12 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.614 09:02:12 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.614 09:02:12 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.614 09:02:12 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.614 09:02:12 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.614 09:02:12 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.614 09:02:12 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.614 09:02:12 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.614 09:02:12 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.614 09:02:12 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:35.614 09:02:12 rpc -- scripts/common.sh@345 -- # : 1 00:09:35.614 09:02:12 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.614 09:02:12 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.614 09:02:12 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:35.614 09:02:12 rpc -- scripts/common.sh@353 -- # local d=1 00:09:35.614 09:02:12 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.614 09:02:12 rpc -- scripts/common.sh@355 -- # echo 1 00:09:35.614 09:02:12 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.614 09:02:12 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:35.614 09:02:12 rpc -- scripts/common.sh@353 -- # local d=2 00:09:35.615 09:02:12 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.615 09:02:12 rpc -- scripts/common.sh@355 -- # echo 2 00:09:35.615 09:02:12 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.615 09:02:12 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.615 09:02:12 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.615 09:02:12 rpc -- scripts/common.sh@368 -- # return 0 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:35.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.615 --rc genhtml_branch_coverage=1 00:09:35.615 --rc genhtml_function_coverage=1 00:09:35.615 --rc genhtml_legend=1 00:09:35.615 --rc geninfo_all_blocks=1 00:09:35.615 --rc geninfo_unexecuted_blocks=1 00:09:35.615 00:09:35.615 ' 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:35.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.615 --rc genhtml_branch_coverage=1 00:09:35.615 --rc genhtml_function_coverage=1 00:09:35.615 --rc genhtml_legend=1 00:09:35.615 --rc geninfo_all_blocks=1 00:09:35.615 --rc geninfo_unexecuted_blocks=1 00:09:35.615 00:09:35.615 ' 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:35.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:35.615 --rc genhtml_branch_coverage=1 00:09:35.615 --rc genhtml_function_coverage=1 00:09:35.615 --rc genhtml_legend=1 00:09:35.615 --rc geninfo_all_blocks=1 00:09:35.615 --rc geninfo_unexecuted_blocks=1 00:09:35.615 00:09:35.615 ' 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:35.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.615 --rc genhtml_branch_coverage=1 00:09:35.615 --rc genhtml_function_coverage=1 00:09:35.615 --rc genhtml_legend=1 00:09:35.615 --rc geninfo_all_blocks=1 00:09:35.615 --rc geninfo_unexecuted_blocks=1 00:09:35.615 00:09:35.615 ' 00:09:35.615 09:02:12 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56768 00:09:35.615 09:02:12 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:35.615 09:02:12 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:35.615 09:02:12 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56768 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@833 -- # '[' -z 56768 ']' 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:35.615 09:02:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.873 [2024-11-06 09:02:12.217809] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:09:35.873 [2024-11-06 09:02:12.218218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56768 ] 00:09:35.873 [2024-11-06 09:02:12.403271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.131 [2024-11-06 09:02:12.535158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:36.131 [2024-11-06 09:02:12.535492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56768' to capture a snapshot of events at runtime. 00:09:36.131 [2024-11-06 09:02:12.535635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.131 [2024-11-06 09:02:12.535778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.131 [2024-11-06 09:02:12.535856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56768 for offline analysis/debug. 
00:09:36.131 [2024-11-06 09:02:12.537303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.067 09:02:13 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:37.067 09:02:13 rpc -- common/autotest_common.sh@866 -- # return 0 00:09:37.067 09:02:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:37.067 09:02:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:37.067 09:02:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:37.067 09:02:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:37.067 09:02:13 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:37.067 09:02:13 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.067 09:02:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.067 ************************************ 00:09:37.067 START TEST rpc_integrity 00:09:37.067 ************************************ 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:37.067 09:02:13 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:37.067 { 00:09:37.067 "name": "Malloc0", 00:09:37.067 "aliases": [ 00:09:37.067 "0d169c14-9aa0-464d-8395-a6746e7577e5" 00:09:37.067 ], 00:09:37.067 "product_name": "Malloc disk", 00:09:37.067 "block_size": 512, 00:09:37.067 "num_blocks": 16384, 00:09:37.067 "uuid": "0d169c14-9aa0-464d-8395-a6746e7577e5", 00:09:37.067 "assigned_rate_limits": { 00:09:37.067 "rw_ios_per_sec": 0, 00:09:37.067 "rw_mbytes_per_sec": 0, 00:09:37.067 "r_mbytes_per_sec": 0, 00:09:37.067 "w_mbytes_per_sec": 0 00:09:37.067 }, 00:09:37.067 "claimed": false, 00:09:37.067 "zoned": false, 00:09:37.067 "supported_io_types": { 00:09:37.067 "read": true, 00:09:37.067 "write": true, 00:09:37.067 "unmap": true, 00:09:37.067 "flush": true, 00:09:37.067 "reset": true, 00:09:37.067 "nvme_admin": false, 00:09:37.067 "nvme_io": false, 00:09:37.067 "nvme_io_md": false, 00:09:37.067 "write_zeroes": true, 00:09:37.067 "zcopy": true, 00:09:37.067 "get_zone_info": false, 00:09:37.067 "zone_management": false, 00:09:37.067 "zone_append": false, 00:09:37.067 "compare": false, 00:09:37.067 "compare_and_write": false, 00:09:37.067 "abort": true, 00:09:37.067 "seek_hole": false, 
00:09:37.067 "seek_data": false, 00:09:37.067 "copy": true, 00:09:37.067 "nvme_iov_md": false 00:09:37.067 }, 00:09:37.067 "memory_domains": [ 00:09:37.067 { 00:09:37.067 "dma_device_id": "system", 00:09:37.067 "dma_device_type": 1 00:09:37.067 }, 00:09:37.067 { 00:09:37.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.067 "dma_device_type": 2 00:09:37.067 } 00:09:37.067 ], 00:09:37.067 "driver_specific": {} 00:09:37.067 } 00:09:37.067 ]' 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.067 [2024-11-06 09:02:13.572542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:37.067 [2024-11-06 09:02:13.572640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.067 [2024-11-06 09:02:13.572673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:37.067 [2024-11-06 09:02:13.572696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.067 [2024-11-06 09:02:13.575882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.067 [2024-11-06 09:02:13.575934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:37.067 Passthru0 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.067 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.067 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:09:37.326 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.326 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:37.326 { 00:09:37.326 "name": "Malloc0", 00:09:37.326 "aliases": [ 00:09:37.326 "0d169c14-9aa0-464d-8395-a6746e7577e5" 00:09:37.326 ], 00:09:37.326 "product_name": "Malloc disk", 00:09:37.326 "block_size": 512, 00:09:37.327 "num_blocks": 16384, 00:09:37.327 "uuid": "0d169c14-9aa0-464d-8395-a6746e7577e5", 00:09:37.327 "assigned_rate_limits": { 00:09:37.327 "rw_ios_per_sec": 0, 00:09:37.327 "rw_mbytes_per_sec": 0, 00:09:37.327 "r_mbytes_per_sec": 0, 00:09:37.327 "w_mbytes_per_sec": 0 00:09:37.327 }, 00:09:37.327 "claimed": true, 00:09:37.327 "claim_type": "exclusive_write", 00:09:37.327 "zoned": false, 00:09:37.327 "supported_io_types": { 00:09:37.327 "read": true, 00:09:37.327 "write": true, 00:09:37.327 "unmap": true, 00:09:37.327 "flush": true, 00:09:37.327 "reset": true, 00:09:37.327 "nvme_admin": false, 00:09:37.327 "nvme_io": false, 00:09:37.327 "nvme_io_md": false, 00:09:37.327 "write_zeroes": true, 00:09:37.327 "zcopy": true, 00:09:37.327 "get_zone_info": false, 00:09:37.327 "zone_management": false, 00:09:37.327 "zone_append": false, 00:09:37.327 "compare": false, 00:09:37.327 "compare_and_write": false, 00:09:37.327 "abort": true, 00:09:37.327 "seek_hole": false, 00:09:37.327 "seek_data": false, 00:09:37.327 "copy": true, 00:09:37.327 "nvme_iov_md": false 00:09:37.327 }, 00:09:37.327 "memory_domains": [ 00:09:37.327 { 00:09:37.327 "dma_device_id": "system", 00:09:37.327 "dma_device_type": 1 00:09:37.327 }, 00:09:37.327 { 00:09:37.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.327 "dma_device_type": 2 00:09:37.327 } 00:09:37.327 ], 00:09:37.327 "driver_specific": {} 00:09:37.327 }, 00:09:37.327 { 00:09:37.327 "name": "Passthru0", 00:09:37.327 "aliases": [ 00:09:37.327 "57a8a233-432e-53f4-a3c3-5c276b1e832a" 00:09:37.327 ], 00:09:37.327 "product_name": "passthru", 00:09:37.327 
"block_size": 512, 00:09:37.327 "num_blocks": 16384, 00:09:37.327 "uuid": "57a8a233-432e-53f4-a3c3-5c276b1e832a", 00:09:37.327 "assigned_rate_limits": { 00:09:37.327 "rw_ios_per_sec": 0, 00:09:37.327 "rw_mbytes_per_sec": 0, 00:09:37.327 "r_mbytes_per_sec": 0, 00:09:37.327 "w_mbytes_per_sec": 0 00:09:37.327 }, 00:09:37.327 "claimed": false, 00:09:37.327 "zoned": false, 00:09:37.327 "supported_io_types": { 00:09:37.327 "read": true, 00:09:37.327 "write": true, 00:09:37.327 "unmap": true, 00:09:37.327 "flush": true, 00:09:37.327 "reset": true, 00:09:37.327 "nvme_admin": false, 00:09:37.327 "nvme_io": false, 00:09:37.327 "nvme_io_md": false, 00:09:37.327 "write_zeroes": true, 00:09:37.327 "zcopy": true, 00:09:37.327 "get_zone_info": false, 00:09:37.327 "zone_management": false, 00:09:37.327 "zone_append": false, 00:09:37.327 "compare": false, 00:09:37.327 "compare_and_write": false, 00:09:37.327 "abort": true, 00:09:37.327 "seek_hole": false, 00:09:37.327 "seek_data": false, 00:09:37.327 "copy": true, 00:09:37.327 "nvme_iov_md": false 00:09:37.327 }, 00:09:37.327 "memory_domains": [ 00:09:37.327 { 00:09:37.327 "dma_device_id": "system", 00:09:37.327 "dma_device_type": 1 00:09:37.327 }, 00:09:37.327 { 00:09:37.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.327 "dma_device_type": 2 00:09:37.327 } 00:09:37.327 ], 00:09:37.327 "driver_specific": { 00:09:37.327 "passthru": { 00:09:37.327 "name": "Passthru0", 00:09:37.327 "base_bdev_name": "Malloc0" 00:09:37.327 } 00:09:37.327 } 00:09:37.327 } 00:09:37.327 ]' 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.327 09:02:13 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:37.327 ************************************ 00:09:37.327 END TEST rpc_integrity 00:09:37.327 ************************************ 00:09:37.327 09:02:13 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:37.327 00:09:37.327 real 0m0.368s 00:09:37.327 user 0m0.224s 00:09:37.327 sys 0m0.041s 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.327 09:02:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.327 09:02:13 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:37.327 09:02:13 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:37.327 09:02:13 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.327 09:02:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.327 ************************************ 00:09:37.327 START TEST rpc_plugins 00:09:37.327 ************************************ 00:09:37.327 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:09:37.327 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:09:37.327 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.327 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.327 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.327 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:37.327 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:37.327 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.327 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.586 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:37.586 { 00:09:37.586 "name": "Malloc1", 00:09:37.586 "aliases": [ 00:09:37.586 "7565ea16-2134-448a-a090-bcea5ac0c5c8" 00:09:37.586 ], 00:09:37.586 "product_name": "Malloc disk", 00:09:37.586 "block_size": 4096, 00:09:37.586 "num_blocks": 256, 00:09:37.586 "uuid": "7565ea16-2134-448a-a090-bcea5ac0c5c8", 00:09:37.586 "assigned_rate_limits": { 00:09:37.586 "rw_ios_per_sec": 0, 00:09:37.586 "rw_mbytes_per_sec": 0, 00:09:37.586 "r_mbytes_per_sec": 0, 00:09:37.586 "w_mbytes_per_sec": 0 00:09:37.586 }, 00:09:37.586 "claimed": false, 00:09:37.586 "zoned": false, 00:09:37.586 "supported_io_types": { 00:09:37.586 "read": true, 00:09:37.586 "write": true, 00:09:37.586 "unmap": true, 00:09:37.586 "flush": true, 00:09:37.586 "reset": true, 00:09:37.586 "nvme_admin": false, 00:09:37.586 "nvme_io": false, 00:09:37.586 "nvme_io_md": false, 00:09:37.586 "write_zeroes": true, 00:09:37.586 "zcopy": true, 00:09:37.586 "get_zone_info": false, 00:09:37.586 "zone_management": false, 00:09:37.586 "zone_append": false, 00:09:37.586 "compare": false, 00:09:37.586 "compare_and_write": false, 00:09:37.586 "abort": true, 00:09:37.586 "seek_hole": false, 00:09:37.586 "seek_data": false, 00:09:37.586 "copy": 
true, 00:09:37.586 "nvme_iov_md": false 00:09:37.586 }, 00:09:37.586 "memory_domains": [ 00:09:37.586 { 00:09:37.586 "dma_device_id": "system", 00:09:37.586 "dma_device_type": 1 00:09:37.586 }, 00:09:37.586 { 00:09:37.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.586 "dma_device_type": 2 00:09:37.586 } 00:09:37.586 ], 00:09:37.586 "driver_specific": {} 00:09:37.586 } 00:09:37.586 ]' 00:09:37.586 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:37.586 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:37.586 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:37.586 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.586 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.586 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:37.586 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.586 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 09:02:13 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.586 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:37.586 09:02:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:37.586 09:02:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:37.586 00:09:37.586 real 0m0.183s 00:09:37.586 user 0m0.116s 00:09:37.586 sys 0m0.024s 00:09:37.586 09:02:14 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:37.586 09:02:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 ************************************ 00:09:37.586 END TEST rpc_plugins 00:09:37.586 ************************************ 00:09:37.586 09:02:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:37.586 09:02:14 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:37.586 09:02:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:37.586 09:02:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 ************************************ 00:09:37.586 START TEST rpc_trace_cmd_test 00:09:37.586 ************************************ 00:09:37.586 09:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:09:37.586 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:37.586 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:37.586 09:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.586 09:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.586 09:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.586 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:37.586 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56768", 00:09:37.586 "tpoint_group_mask": "0x8", 00:09:37.586 "iscsi_conn": { 00:09:37.586 "mask": "0x2", 00:09:37.586 "tpoint_mask": "0x0" 00:09:37.586 }, 00:09:37.586 "scsi": { 00:09:37.586 "mask": "0x4", 00:09:37.586 "tpoint_mask": "0x0" 00:09:37.586 }, 00:09:37.586 "bdev": { 00:09:37.586 "mask": "0x8", 00:09:37.586 "tpoint_mask": "0xffffffffffffffff" 00:09:37.586 }, 00:09:37.586 "nvmf_rdma": { 00:09:37.586 "mask": "0x10", 00:09:37.586 "tpoint_mask": "0x0" 00:09:37.586 }, 00:09:37.586 "nvmf_tcp": { 00:09:37.586 "mask": "0x20", 00:09:37.586 "tpoint_mask": "0x0" 00:09:37.586 }, 00:09:37.586 "ftl": { 00:09:37.586 "mask": "0x40", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "blobfs": { 00:09:37.587 "mask": "0x80", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "dsa": { 00:09:37.587 "mask": "0x200", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "thread": { 00:09:37.587 "mask": "0x400", 00:09:37.587 
"tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "nvme_pcie": { 00:09:37.587 "mask": "0x800", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "iaa": { 00:09:37.587 "mask": "0x1000", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "nvme_tcp": { 00:09:37.587 "mask": "0x2000", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "bdev_nvme": { 00:09:37.587 "mask": "0x4000", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "sock": { 00:09:37.587 "mask": "0x8000", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "blob": { 00:09:37.587 "mask": "0x10000", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "bdev_raid": { 00:09:37.587 "mask": "0x20000", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 }, 00:09:37.587 "scheduler": { 00:09:37.587 "mask": "0x40000", 00:09:37.587 "tpoint_mask": "0x0" 00:09:37.587 } 00:09:37.587 }' 00:09:37.587 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:37.845 00:09:37.845 real 0m0.279s 00:09:37.845 user 0m0.241s 00:09:37.845 sys 0m0.029s 00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 
00:09:37.845 09:02:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.845 ************************************ 00:09:37.845 END TEST rpc_trace_cmd_test 00:09:37.845 ************************************ 00:09:38.104 09:02:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:38.105 09:02:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:38.105 09:02:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:38.105 09:02:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:38.105 09:02:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.105 09:02:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.105 ************************************ 00:09:38.105 START TEST rpc_daemon_integrity 00:09:38.105 ************************************ 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:38.105 { 00:09:38.105 "name": "Malloc2", 00:09:38.105 "aliases": [ 00:09:38.105 "1507f138-59b8-4bec-b3f8-1df4a75fe0b5" 00:09:38.105 ], 00:09:38.105 "product_name": "Malloc disk", 00:09:38.105 "block_size": 512, 00:09:38.105 "num_blocks": 16384, 00:09:38.105 "uuid": "1507f138-59b8-4bec-b3f8-1df4a75fe0b5", 00:09:38.105 "assigned_rate_limits": { 00:09:38.105 "rw_ios_per_sec": 0, 00:09:38.105 "rw_mbytes_per_sec": 0, 00:09:38.105 "r_mbytes_per_sec": 0, 00:09:38.105 "w_mbytes_per_sec": 0 00:09:38.105 }, 00:09:38.105 "claimed": false, 00:09:38.105 "zoned": false, 00:09:38.105 "supported_io_types": { 00:09:38.105 "read": true, 00:09:38.105 "write": true, 00:09:38.105 "unmap": true, 00:09:38.105 "flush": true, 00:09:38.105 "reset": true, 00:09:38.105 "nvme_admin": false, 00:09:38.105 "nvme_io": false, 00:09:38.105 "nvme_io_md": false, 00:09:38.105 "write_zeroes": true, 00:09:38.105 "zcopy": true, 00:09:38.105 "get_zone_info": false, 00:09:38.105 "zone_management": false, 00:09:38.105 "zone_append": false, 00:09:38.105 "compare": false, 00:09:38.105 "compare_and_write": false, 00:09:38.105 "abort": true, 00:09:38.105 "seek_hole": false, 00:09:38.105 "seek_data": false, 00:09:38.105 "copy": true, 00:09:38.105 "nvme_iov_md": false 00:09:38.105 }, 00:09:38.105 "memory_domains": [ 00:09:38.105 { 00:09:38.105 "dma_device_id": "system", 00:09:38.105 "dma_device_type": 1 00:09:38.105 }, 00:09:38.105 { 00:09:38.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.105 "dma_device_type": 2 00:09:38.105 } 
00:09:38.105 ], 00:09:38.105 "driver_specific": {} 00:09:38.105 } 00:09:38.105 ]' 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.105 [2024-11-06 09:02:14.572270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:38.105 [2024-11-06 09:02:14.572367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.105 [2024-11-06 09:02:14.572411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:38.105 [2024-11-06 09:02:14.572429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.105 [2024-11-06 09:02:14.575615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.105 [2024-11-06 09:02:14.575696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:38.105 Passthru0 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:38.105 { 00:09:38.105 "name": "Malloc2", 00:09:38.105 "aliases": [ 00:09:38.105 "1507f138-59b8-4bec-b3f8-1df4a75fe0b5" 
00:09:38.105 ], 00:09:38.105 "product_name": "Malloc disk", 00:09:38.105 "block_size": 512, 00:09:38.105 "num_blocks": 16384, 00:09:38.105 "uuid": "1507f138-59b8-4bec-b3f8-1df4a75fe0b5", 00:09:38.105 "assigned_rate_limits": { 00:09:38.105 "rw_ios_per_sec": 0, 00:09:38.105 "rw_mbytes_per_sec": 0, 00:09:38.105 "r_mbytes_per_sec": 0, 00:09:38.105 "w_mbytes_per_sec": 0 00:09:38.105 }, 00:09:38.105 "claimed": true, 00:09:38.105 "claim_type": "exclusive_write", 00:09:38.105 "zoned": false, 00:09:38.105 "supported_io_types": { 00:09:38.105 "read": true, 00:09:38.105 "write": true, 00:09:38.105 "unmap": true, 00:09:38.105 "flush": true, 00:09:38.105 "reset": true, 00:09:38.105 "nvme_admin": false, 00:09:38.105 "nvme_io": false, 00:09:38.105 "nvme_io_md": false, 00:09:38.105 "write_zeroes": true, 00:09:38.105 "zcopy": true, 00:09:38.105 "get_zone_info": false, 00:09:38.105 "zone_management": false, 00:09:38.105 "zone_append": false, 00:09:38.105 "compare": false, 00:09:38.105 "compare_and_write": false, 00:09:38.105 "abort": true, 00:09:38.105 "seek_hole": false, 00:09:38.105 "seek_data": false, 00:09:38.105 "copy": true, 00:09:38.105 "nvme_iov_md": false 00:09:38.105 }, 00:09:38.105 "memory_domains": [ 00:09:38.105 { 00:09:38.105 "dma_device_id": "system", 00:09:38.105 "dma_device_type": 1 00:09:38.105 }, 00:09:38.105 { 00:09:38.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.105 "dma_device_type": 2 00:09:38.105 } 00:09:38.105 ], 00:09:38.105 "driver_specific": {} 00:09:38.105 }, 00:09:38.105 { 00:09:38.105 "name": "Passthru0", 00:09:38.105 "aliases": [ 00:09:38.105 "6c6e0c68-8764-5ca2-ad9f-1d5afb0ba8f6" 00:09:38.105 ], 00:09:38.105 "product_name": "passthru", 00:09:38.105 "block_size": 512, 00:09:38.105 "num_blocks": 16384, 00:09:38.105 "uuid": "6c6e0c68-8764-5ca2-ad9f-1d5afb0ba8f6", 00:09:38.105 "assigned_rate_limits": { 00:09:38.105 "rw_ios_per_sec": 0, 00:09:38.105 "rw_mbytes_per_sec": 0, 00:09:38.105 "r_mbytes_per_sec": 0, 00:09:38.105 "w_mbytes_per_sec": 0 
00:09:38.105 }, 00:09:38.105 "claimed": false, 00:09:38.105 "zoned": false, 00:09:38.105 "supported_io_types": { 00:09:38.105 "read": true, 00:09:38.105 "write": true, 00:09:38.105 "unmap": true, 00:09:38.105 "flush": true, 00:09:38.105 "reset": true, 00:09:38.105 "nvme_admin": false, 00:09:38.105 "nvme_io": false, 00:09:38.105 "nvme_io_md": false, 00:09:38.105 "write_zeroes": true, 00:09:38.105 "zcopy": true, 00:09:38.105 "get_zone_info": false, 00:09:38.105 "zone_management": false, 00:09:38.105 "zone_append": false, 00:09:38.105 "compare": false, 00:09:38.105 "compare_and_write": false, 00:09:38.105 "abort": true, 00:09:38.105 "seek_hole": false, 00:09:38.105 "seek_data": false, 00:09:38.105 "copy": true, 00:09:38.105 "nvme_iov_md": false 00:09:38.105 }, 00:09:38.105 "memory_domains": [ 00:09:38.105 { 00:09:38.105 "dma_device_id": "system", 00:09:38.105 "dma_device_type": 1 00:09:38.105 }, 00:09:38.105 { 00:09:38.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.105 "dma_device_type": 2 00:09:38.105 } 00:09:38.105 ], 00:09:38.105 "driver_specific": { 00:09:38.105 "passthru": { 00:09:38.105 "name": "Passthru0", 00:09:38.105 "base_bdev_name": "Malloc2" 00:09:38.105 } 00:09:38.105 } 00:09:38.105 } 00:09:38.105 ]' 00:09:38.105 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:38.365 00:09:38.365 real 0m0.352s 00:09:38.365 user 0m0.218s 00:09:38.365 sys 0m0.036s 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.365 ************************************ 00:09:38.365 END TEST rpc_daemon_integrity 00:09:38.365 09:02:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:38.365 ************************************ 00:09:38.365 09:02:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:38.365 09:02:14 rpc -- rpc/rpc.sh@84 -- # killprocess 56768 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@952 -- # '[' -z 56768 ']' 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@956 -- # kill -0 56768 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@957 -- # uname 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56768 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:38.365 
09:02:14 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56768' 00:09:38.365 killing process with pid 56768 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@971 -- # kill 56768 00:09:38.365 09:02:14 rpc -- common/autotest_common.sh@976 -- # wait 56768 00:09:40.897 00:09:40.897 real 0m5.101s 00:09:40.897 user 0m5.784s 00:09:40.897 sys 0m0.907s 00:09:40.897 09:02:16 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:40.897 09:02:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.897 ************************************ 00:09:40.897 END TEST rpc 00:09:40.897 ************************************ 00:09:40.897 09:02:17 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:40.897 09:02:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:40.897 09:02:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.897 09:02:17 -- common/autotest_common.sh@10 -- # set +x 00:09:40.897 ************************************ 00:09:40.897 START TEST skip_rpc 00:09:40.897 ************************************ 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:40.897 * Looking for test storage... 
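The `killprocess 56768` trace above probes the pid with `kill -0`, checks the process name, then kills and waits. A reduced stand-in for that `common/autotest_common.sh` helper (the real one also verifies the comm name and sudo handling, omitted here):

```shell
# Hedged sketch of the killprocess idiom traced above: probe, kill, reap.
# killprocess_sketch is our simplified stand-in, not the real helper.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # bail if the pid is already gone
  kill "$pid"                              # SIGTERM, as in the trace above
  wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit status
}
sleep 60 &
pid=$!
killprocess_sketch "$pid" && echo "killed process with pid $pid"
```

The `|| true` on `wait` matters: a SIGTERM'd child reports exit 143, which would otherwise make the helper itself look like a failure.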
00:09:40.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.897 09:02:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.897 --rc genhtml_branch_coverage=1 00:09:40.897 --rc genhtml_function_coverage=1 00:09:40.897 --rc genhtml_legend=1 00:09:40.897 --rc geninfo_all_blocks=1 00:09:40.897 --rc geninfo_unexecuted_blocks=1 00:09:40.897 00:09:40.897 ' 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.897 --rc genhtml_branch_coverage=1 00:09:40.897 --rc genhtml_function_coverage=1 00:09:40.897 --rc genhtml_legend=1 00:09:40.897 --rc geninfo_all_blocks=1 00:09:40.897 --rc geninfo_unexecuted_blocks=1 00:09:40.897 00:09:40.897 ' 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1705 -- # export 
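The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) splits each version on `.-:` and compares component by component. The same decision can be sketched with `sort -V`, which performs that component-wise ordering internally:

```shell
# Hedged sketch of the dotted-version comparison traced above.
# sort -V does the per-component ordering that cmp_versions loops over.
version_lt() {
  [ "$1" = "$2" ] && return 1                                  # equal is not less
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] # $1 sorts first?
}
version_lt 1.15 2 && echo "lcov 1.15 < 2: use legacy lcov options"
```

This is the branch the log takes: lcov 1.15 is below 2, so the legacy `--rc lcov_branch_coverage` style options are exported.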
'LCOV=lcov 00:09:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.897 --rc genhtml_branch_coverage=1 00:09:40.897 --rc genhtml_function_coverage=1 00:09:40.897 --rc genhtml_legend=1 00:09:40.897 --rc geninfo_all_blocks=1 00:09:40.897 --rc geninfo_unexecuted_blocks=1 00:09:40.897 00:09:40.897 ' 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.897 --rc genhtml_branch_coverage=1 00:09:40.897 --rc genhtml_function_coverage=1 00:09:40.897 --rc genhtml_legend=1 00:09:40.897 --rc geninfo_all_blocks=1 00:09:40.897 --rc geninfo_unexecuted_blocks=1 00:09:40.897 00:09:40.897 ' 00:09:40.897 09:02:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:40.897 09:02:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:40.897 09:02:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:40.897 09:02:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:40.897 ************************************ 00:09:40.897 START TEST skip_rpc 00:09:40.897 ************************************ 00:09:40.897 09:02:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:09:40.897 09:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56997 00:09:40.897 09:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:40.897 09:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.897 09:02:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:40.897 [2024-11-06 09:02:17.357513] Starting SPDK v25.01-pre 
git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:09:40.897 [2024-11-06 09:02:17.357702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56997 ] 00:09:41.155 [2024-11-06 09:02:17.543539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.155 [2024-11-06 09:02:17.671148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.448 09:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56997 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56997 ']' 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56997 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56997 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:46.449 killing process with pid 56997 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56997' 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56997 00:09:46.449 09:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56997 00:09:48.349 00:09:48.349 real 0m7.241s 00:09:48.349 user 0m6.688s 00:09:48.349 sys 0m0.454s 00:09:48.349 09:02:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:48.349 ************************************ 00:09:48.349 END TEST skip_rpc 00:09:48.349 ************************************ 00:09:48.349 09:02:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.349 09:02:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:48.349 09:02:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:48.350 09:02:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:48.350 09:02:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:48.350 
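The `NOT rpc_cmd spdk_get_version` sequence above exercises the expected-failure pattern: with the target started via `--no-rpc-server`, the RPC must fail, and the `NOT` wrapper inverts that failure into a test pass. A minimal sketch of the wrapper:

```shell
# Hedged sketch of the NOT helper used above: invert a command's exit
# status so that an expected failure (es=1 in the trace) passes the test.
NOT() {
  if "$@"; then
    return 1      # unexpected success: the test should fail
  fi
  return 0        # failure was expected, as with rpc_cmd above
}
NOT false && echo "expected failure observed"
```

The real helper additionally distinguishes signal exits (`es > 128`) from ordinary failures, as the `(( es > 128 ))` line in the trace shows.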
************************************ 00:09:48.350 START TEST skip_rpc_with_json 00:09:48.350 ************************************ 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57101 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57101 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57101 ']' 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.350 09:02:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:48.350 [2024-11-06 09:02:24.612106] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:09:48.350 [2024-11-06 09:02:24.612258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57101 ] 00:09:48.350 [2024-11-06 09:02:24.785416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.608 [2024-11-06 09:02:24.905808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.553 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:49.553 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:09:49.553 09:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:49.553 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:49.554 [2024-11-06 09:02:25.754274] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:49.554 request: 00:09:49.554 { 00:09:49.554 "trtype": "tcp", 00:09:49.554 "method": "nvmf_get_transports", 00:09:49.554 "req_id": 1 00:09:49.554 } 00:09:49.554 Got JSON-RPC error response 00:09:49.554 response: 00:09:49.554 { 00:09:49.554 "code": -19, 00:09:49.554 "message": "No such device" 00:09:49.554 } 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:49.554 [2024-11-06 09:02:25.766406] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
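The JSON-RPC error block above (`"code": -19`, "No such device") is what `nvmf_get_transports --trtype tcp` returns before any transport exists; the subsequent `nvmf_create_transport -t tcp` then succeeds. A sketch of pulling the code out of that response body (the text is copied from the log; the parsing is plain parameter expansion, not part of the test suite itself):

```shell
# Hedged sketch: extracting the error code from the JSON-RPC error
# response shown above. -19 is ENODEV, "No such device".
response='{ "code": -19, "message": "No such device" }'
code=${response#*\"code\": }   # strip everything up to the code value
code=${code%%,*}               # strip everything after it
[ "$code" = -19 ] && echo "nvmf transport not created yet (ENODEV)"
```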
00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.554 09:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:49.554 { 00:09:49.554 "subsystems": [ 00:09:49.554 { 00:09:49.554 "subsystem": "fsdev", 00:09:49.554 "config": [ 00:09:49.554 { 00:09:49.554 "method": "fsdev_set_opts", 00:09:49.554 "params": { 00:09:49.554 "fsdev_io_pool_size": 65535, 00:09:49.554 "fsdev_io_cache_size": 256 00:09:49.554 } 00:09:49.554 } 00:09:49.554 ] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "keyring", 00:09:49.554 "config": [] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "iobuf", 00:09:49.554 "config": [ 00:09:49.554 { 00:09:49.554 "method": "iobuf_set_options", 00:09:49.554 "params": { 00:09:49.554 "small_pool_count": 8192, 00:09:49.554 "large_pool_count": 1024, 00:09:49.554 "small_bufsize": 8192, 00:09:49.554 "large_bufsize": 135168, 00:09:49.554 "enable_numa": false 00:09:49.554 } 00:09:49.554 } 00:09:49.554 ] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "sock", 00:09:49.554 "config": [ 00:09:49.554 { 00:09:49.554 "method": "sock_set_default_impl", 00:09:49.554 "params": { 00:09:49.554 "impl_name": "posix" 00:09:49.554 } 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "method": "sock_impl_set_options", 00:09:49.554 "params": { 00:09:49.554 "impl_name": "ssl", 00:09:49.554 "recv_buf_size": 4096, 00:09:49.554 "send_buf_size": 4096, 00:09:49.554 "enable_recv_pipe": true, 00:09:49.554 "enable_quickack": false, 00:09:49.554 
"enable_placement_id": 0, 00:09:49.554 "enable_zerocopy_send_server": true, 00:09:49.554 "enable_zerocopy_send_client": false, 00:09:49.554 "zerocopy_threshold": 0, 00:09:49.554 "tls_version": 0, 00:09:49.554 "enable_ktls": false 00:09:49.554 } 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "method": "sock_impl_set_options", 00:09:49.554 "params": { 00:09:49.554 "impl_name": "posix", 00:09:49.554 "recv_buf_size": 2097152, 00:09:49.554 "send_buf_size": 2097152, 00:09:49.554 "enable_recv_pipe": true, 00:09:49.554 "enable_quickack": false, 00:09:49.554 "enable_placement_id": 0, 00:09:49.554 "enable_zerocopy_send_server": true, 00:09:49.554 "enable_zerocopy_send_client": false, 00:09:49.554 "zerocopy_threshold": 0, 00:09:49.554 "tls_version": 0, 00:09:49.554 "enable_ktls": false 00:09:49.554 } 00:09:49.554 } 00:09:49.554 ] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "vmd", 00:09:49.554 "config": [] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "accel", 00:09:49.554 "config": [ 00:09:49.554 { 00:09:49.554 "method": "accel_set_options", 00:09:49.554 "params": { 00:09:49.554 "small_cache_size": 128, 00:09:49.554 "large_cache_size": 16, 00:09:49.554 "task_count": 2048, 00:09:49.554 "sequence_count": 2048, 00:09:49.554 "buf_count": 2048 00:09:49.554 } 00:09:49.554 } 00:09:49.554 ] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "bdev", 00:09:49.554 "config": [ 00:09:49.554 { 00:09:49.554 "method": "bdev_set_options", 00:09:49.554 "params": { 00:09:49.554 "bdev_io_pool_size": 65535, 00:09:49.554 "bdev_io_cache_size": 256, 00:09:49.554 "bdev_auto_examine": true, 00:09:49.554 "iobuf_small_cache_size": 128, 00:09:49.554 "iobuf_large_cache_size": 16 00:09:49.554 } 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "method": "bdev_raid_set_options", 00:09:49.554 "params": { 00:09:49.554 "process_window_size_kb": 1024, 00:09:49.554 "process_max_bandwidth_mb_sec": 0 00:09:49.554 } 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "method": "bdev_iscsi_set_options", 
00:09:49.554 "params": { 00:09:49.554 "timeout_sec": 30 00:09:49.554 } 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "method": "bdev_nvme_set_options", 00:09:49.554 "params": { 00:09:49.554 "action_on_timeout": "none", 00:09:49.554 "timeout_us": 0, 00:09:49.554 "timeout_admin_us": 0, 00:09:49.554 "keep_alive_timeout_ms": 10000, 00:09:49.554 "arbitration_burst": 0, 00:09:49.554 "low_priority_weight": 0, 00:09:49.554 "medium_priority_weight": 0, 00:09:49.554 "high_priority_weight": 0, 00:09:49.554 "nvme_adminq_poll_period_us": 10000, 00:09:49.554 "nvme_ioq_poll_period_us": 0, 00:09:49.554 "io_queue_requests": 0, 00:09:49.554 "delay_cmd_submit": true, 00:09:49.554 "transport_retry_count": 4, 00:09:49.554 "bdev_retry_count": 3, 00:09:49.554 "transport_ack_timeout": 0, 00:09:49.554 "ctrlr_loss_timeout_sec": 0, 00:09:49.554 "reconnect_delay_sec": 0, 00:09:49.554 "fast_io_fail_timeout_sec": 0, 00:09:49.554 "disable_auto_failback": false, 00:09:49.554 "generate_uuids": false, 00:09:49.554 "transport_tos": 0, 00:09:49.554 "nvme_error_stat": false, 00:09:49.554 "rdma_srq_size": 0, 00:09:49.554 "io_path_stat": false, 00:09:49.554 "allow_accel_sequence": false, 00:09:49.554 "rdma_max_cq_size": 0, 00:09:49.554 "rdma_cm_event_timeout_ms": 0, 00:09:49.554 "dhchap_digests": [ 00:09:49.554 "sha256", 00:09:49.554 "sha384", 00:09:49.554 "sha512" 00:09:49.554 ], 00:09:49.554 "dhchap_dhgroups": [ 00:09:49.554 "null", 00:09:49.554 "ffdhe2048", 00:09:49.554 "ffdhe3072", 00:09:49.554 "ffdhe4096", 00:09:49.554 "ffdhe6144", 00:09:49.554 "ffdhe8192" 00:09:49.554 ] 00:09:49.554 } 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "method": "bdev_nvme_set_hotplug", 00:09:49.554 "params": { 00:09:49.554 "period_us": 100000, 00:09:49.554 "enable": false 00:09:49.554 } 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "method": "bdev_wait_for_examine" 00:09:49.554 } 00:09:49.554 ] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "scsi", 00:09:49.554 "config": null 00:09:49.554 }, 00:09:49.554 { 
00:09:49.554 "subsystem": "scheduler", 00:09:49.554 "config": [ 00:09:49.554 { 00:09:49.554 "method": "framework_set_scheduler", 00:09:49.554 "params": { 00:09:49.554 "name": "static" 00:09:49.554 } 00:09:49.554 } 00:09:49.554 ] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "vhost_scsi", 00:09:49.554 "config": [] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "vhost_blk", 00:09:49.554 "config": [] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "ublk", 00:09:49.554 "config": [] 00:09:49.554 }, 00:09:49.554 { 00:09:49.554 "subsystem": "nbd", 00:09:49.554 "config": [] 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "subsystem": "nvmf", 00:09:49.555 "config": [ 00:09:49.555 { 00:09:49.555 "method": "nvmf_set_config", 00:09:49.555 "params": { 00:09:49.555 "discovery_filter": "match_any", 00:09:49.555 "admin_cmd_passthru": { 00:09:49.555 "identify_ctrlr": false 00:09:49.555 }, 00:09:49.555 "dhchap_digests": [ 00:09:49.555 "sha256", 00:09:49.555 "sha384", 00:09:49.555 "sha512" 00:09:49.555 ], 00:09:49.555 "dhchap_dhgroups": [ 00:09:49.555 "null", 00:09:49.555 "ffdhe2048", 00:09:49.555 "ffdhe3072", 00:09:49.555 "ffdhe4096", 00:09:49.555 "ffdhe6144", 00:09:49.555 "ffdhe8192" 00:09:49.555 ] 00:09:49.555 } 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "method": "nvmf_set_max_subsystems", 00:09:49.555 "params": { 00:09:49.555 "max_subsystems": 1024 00:09:49.555 } 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "method": "nvmf_set_crdt", 00:09:49.555 "params": { 00:09:49.555 "crdt1": 0, 00:09:49.555 "crdt2": 0, 00:09:49.555 "crdt3": 0 00:09:49.555 } 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "method": "nvmf_create_transport", 00:09:49.555 "params": { 00:09:49.555 "trtype": "TCP", 00:09:49.555 "max_queue_depth": 128, 00:09:49.555 "max_io_qpairs_per_ctrlr": 127, 00:09:49.555 "in_capsule_data_size": 4096, 00:09:49.555 "max_io_size": 131072, 00:09:49.555 "io_unit_size": 131072, 00:09:49.555 "max_aq_depth": 128, 00:09:49.555 "num_shared_buffers": 511, 
00:09:49.555 "buf_cache_size": 4294967295, 00:09:49.555 "dif_insert_or_strip": false, 00:09:49.555 "zcopy": false, 00:09:49.555 "c2h_success": true, 00:09:49.555 "sock_priority": 0, 00:09:49.555 "abort_timeout_sec": 1, 00:09:49.555 "ack_timeout": 0, 00:09:49.555 "data_wr_pool_size": 0 00:09:49.555 } 00:09:49.555 } 00:09:49.555 ] 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "subsystem": "iscsi", 00:09:49.555 "config": [ 00:09:49.555 { 00:09:49.555 "method": "iscsi_set_options", 00:09:49.555 "params": { 00:09:49.555 "node_base": "iqn.2016-06.io.spdk", 00:09:49.555 "max_sessions": 128, 00:09:49.555 "max_connections_per_session": 2, 00:09:49.555 "max_queue_depth": 64, 00:09:49.555 "default_time2wait": 2, 00:09:49.555 "default_time2retain": 20, 00:09:49.555 "first_burst_length": 8192, 00:09:49.555 "immediate_data": true, 00:09:49.555 "allow_duplicated_isid": false, 00:09:49.555 "error_recovery_level": 0, 00:09:49.555 "nop_timeout": 60, 00:09:49.555 "nop_in_interval": 30, 00:09:49.555 "disable_chap": false, 00:09:49.555 "require_chap": false, 00:09:49.555 "mutual_chap": false, 00:09:49.555 "chap_group": 0, 00:09:49.555 "max_large_datain_per_connection": 64, 00:09:49.555 "max_r2t_per_connection": 4, 00:09:49.555 "pdu_pool_size": 36864, 00:09:49.555 "immediate_data_pool_size": 16384, 00:09:49.555 "data_out_pool_size": 2048 00:09:49.555 } 00:09:49.555 } 00:09:49.555 ] 00:09:49.555 } 00:09:49.555 ] 00:09:49.555 } 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57101 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57101 ']' 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57101 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57101 00:09:49.555 killing process with pid 57101 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57101' 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57101 00:09:49.555 09:02:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57101 00:09:52.088 09:02:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57152 00:09:52.088 09:02:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:52.088 09:02:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57152 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57152 ']' 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57152 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57152 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:57.387 killing process with pid 57152 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57152' 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57152 00:09:57.387 09:02:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57152 00:09:59.292 09:02:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:59.292 09:02:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:59.292 ************************************ 00:09:59.292 END TEST skip_rpc_with_json 00:09:59.292 ************************************ 00:09:59.292 00:09:59.292 real 0m10.949s 00:09:59.292 user 0m10.388s 00:09:59.292 sys 0m0.969s 00:09:59.292 09:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.292 09:02:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:59.293 09:02:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:59.293 09:02:35 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.293 09:02:35 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.293 09:02:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.293 ************************************ 00:09:59.293 START TEST skip_rpc_with_delay 00:09:59.293 ************************************ 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:59.293 
09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:59.293 [2024-11-06 09:02:35.626438] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:59.293 00:09:59.293 real 0m0.188s 00:09:59.293 user 0m0.109s 00:09:59.293 sys 0m0.078s 00:09:59.293 ************************************ 00:09:59.293 END TEST skip_rpc_with_delay 00:09:59.293 ************************************ 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:59.293 09:02:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:59.293 09:02:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:59.293 09:02:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:59.293 09:02:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:59.293 09:02:35 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:59.293 09:02:35 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.293 09:02:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:59.293 ************************************ 00:09:59.293 START TEST exit_on_failed_rpc_init 00:09:59.293 ************************************ 00:09:59.293 09:02:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:09:59.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57280 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57280 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57280 ']' 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.294 09:02:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:59.567 [2024-11-06 09:02:35.875220] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:09:59.567 [2024-11-06 09:02:35.875390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57280 ] 00:09:59.567 [2024-11-06 09:02:36.057614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.829 [2024-11-06 09:02:36.193003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:00.765 09:02:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:10:00.765 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:10:00.765 [2024-11-06 09:02:37.181380] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:10:00.765 [2024-11-06 09:02:37.181543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57309 ] 00:10:01.023 [2024-11-06 09:02:37.365620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.023 [2024-11-06 09:02:37.526679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.023 [2024-11-06 09:02:37.526868] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:10:01.023 [2024-11-06 09:02:37.526897] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:10:01.023 [2024-11-06 09:02:37.526920] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57280 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57280 ']' 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57280 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:01.282 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57280 00:10:01.540 killing process with pid 57280 00:10:01.540 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:01.540 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:01.540 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 57280' 00:10:01.540 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57280 00:10:01.540 09:02:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57280 00:10:04.103 00:10:04.103 real 0m4.302s 00:10:04.103 user 0m4.768s 00:10:04.103 sys 0m0.711s 00:10:04.103 ************************************ 00:10:04.103 END TEST exit_on_failed_rpc_init 00:10:04.103 ************************************ 00:10:04.103 09:02:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.103 09:02:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:10:04.103 09:02:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:10:04.103 00:10:04.103 real 0m23.049s 00:10:04.103 user 0m22.116s 00:10:04.103 sys 0m2.416s 00:10:04.103 09:02:40 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.103 ************************************ 00:10:04.103 END TEST skip_rpc 00:10:04.103 ************************************ 00:10:04.103 09:02:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.103 09:02:40 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:04.103 09:02:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.103 09:02:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.103 09:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:04.103 ************************************ 00:10:04.103 START TEST rpc_client 00:10:04.103 ************************************ 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:10:04.103 * Looking for test storage... 
00:10:04.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.103 09:02:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.103 ' 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.103 ' 00:10:04.103 09:02:40 rpc_client -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.103 ' 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.103 --rc genhtml_branch_coverage=1 00:10:04.103 --rc genhtml_function_coverage=1 00:10:04.103 --rc genhtml_legend=1 00:10:04.103 --rc geninfo_all_blocks=1 00:10:04.103 --rc geninfo_unexecuted_blocks=1 00:10:04.103 00:10:04.103 ' 00:10:04.103 09:02:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:10:04.103 OK 00:10:04.103 09:02:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:10:04.103 ************************************ 00:10:04.103 END TEST rpc_client 00:10:04.103 ************************************ 00:10:04.103 00:10:04.103 real 0m0.260s 00:10:04.103 user 0m0.153s 00:10:04.103 sys 0m0.115s 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.103 09:02:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:10:04.103 09:02:40 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:04.103 09:02:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.103 09:02:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.103 09:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:04.103 ************************************ 00:10:04.103 START TEST json_config 00:10:04.103 ************************************ 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.104 09:02:40 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.104 09:02:40 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.104 09:02:40 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.104 09:02:40 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.104 09:02:40 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.104 09:02:40 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.104 09:02:40 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.104 09:02:40 json_config -- scripts/common.sh@344 -- # case "$op" in 00:10:04.104 09:02:40 json_config -- scripts/common.sh@345 -- # : 1 00:10:04.104 09:02:40 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.104 09:02:40 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.104 09:02:40 json_config -- scripts/common.sh@365 -- # decimal 1 00:10:04.104 09:02:40 json_config -- scripts/common.sh@353 -- # local d=1 00:10:04.104 09:02:40 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.104 09:02:40 json_config -- scripts/common.sh@355 -- # echo 1 00:10:04.104 09:02:40 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.104 09:02:40 json_config -- scripts/common.sh@366 -- # decimal 2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@353 -- # local d=2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.104 09:02:40 json_config -- scripts/common.sh@355 -- # echo 2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.104 09:02:40 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.104 09:02:40 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.104 09:02:40 json_config -- scripts/common.sh@368 -- # return 0 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.104 --rc genhtml_branch_coverage=1 00:10:04.104 --rc genhtml_function_coverage=1 00:10:04.104 --rc genhtml_legend=1 00:10:04.104 --rc geninfo_all_blocks=1 00:10:04.104 --rc geninfo_unexecuted_blocks=1 00:10:04.104 00:10:04.104 ' 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.104 --rc genhtml_branch_coverage=1 00:10:04.104 --rc genhtml_function_coverage=1 00:10:04.104 --rc genhtml_legend=1 00:10:04.104 --rc geninfo_all_blocks=1 00:10:04.104 --rc geninfo_unexecuted_blocks=1 00:10:04.104 00:10:04.104 ' 00:10:04.104 09:02:40 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.104 --rc genhtml_branch_coverage=1 00:10:04.104 --rc genhtml_function_coverage=1 00:10:04.104 --rc genhtml_legend=1 00:10:04.104 --rc geninfo_all_blocks=1 00:10:04.104 --rc geninfo_unexecuted_blocks=1 00:10:04.104 00:10:04.104 ' 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.104 --rc genhtml_branch_coverage=1 00:10:04.104 --rc genhtml_function_coverage=1 00:10:04.104 --rc genhtml_legend=1 00:10:04.104 --rc geninfo_all_blocks=1 00:10:04.104 --rc geninfo_unexecuted_blocks=1 00:10:04.104 00:10:04.104 ' 00:10:04.104 09:02:40 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@7 -- # uname -s 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:36a6fb36-c4b3-4edd-b999-d3b5472e1260 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=36a6fb36-c4b3-4edd-b999-d3b5472e1260 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.104 09:02:40 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.104 09:02:40 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.104 09:02:40 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.104 09:02:40 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.104 09:02:40 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.104 09:02:40 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.104 09:02:40 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.104 09:02:40 json_config -- paths/export.sh@5 -- # export PATH 00:10:04.104 09:02:40 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@51 -- # : 0 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.104 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.104 09:02:40 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.104 WARNING: No tests are enabled so not running JSON configuration tests 00:10:04.104 09:02:40 json_config -- 
json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:04.104 09:02:40 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:10:04.104 09:02:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:10:04.104 09:02:40 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:10:04.104 09:02:40 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:10:04.104 09:02:40 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:10:04.104 09:02:40 json_config -- json_config/json_config.sh@28 -- # exit 0 00:10:04.104 ************************************ 00:10:04.104 END TEST json_config 00:10:04.104 ************************************ 00:10:04.104 00:10:04.104 real 0m0.182s 00:10:04.104 user 0m0.129s 00:10:04.104 sys 0m0.057s 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:04.104 09:02:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.365 09:02:40 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:04.365 09:02:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:04.365 09:02:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:04.365 09:02:40 -- common/autotest_common.sh@10 -- # set +x 00:10:04.365 ************************************ 00:10:04.365 START TEST json_config_extra_key 00:10:04.365 ************************************ 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:04.365 09:02:40 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:04.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.365 --rc genhtml_branch_coverage=1 00:10:04.365 --rc genhtml_function_coverage=1 00:10:04.365 --rc genhtml_legend=1 00:10:04.365 --rc geninfo_all_blocks=1 00:10:04.365 --rc geninfo_unexecuted_blocks=1 00:10:04.365 00:10:04.365 ' 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:04.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.365 --rc genhtml_branch_coverage=1 00:10:04.365 --rc genhtml_function_coverage=1 00:10:04.365 --rc 
genhtml_legend=1 00:10:04.365 --rc geninfo_all_blocks=1 00:10:04.365 --rc geninfo_unexecuted_blocks=1 00:10:04.365 00:10:04.365 ' 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:04.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.365 --rc genhtml_branch_coverage=1 00:10:04.365 --rc genhtml_function_coverage=1 00:10:04.365 --rc genhtml_legend=1 00:10:04.365 --rc geninfo_all_blocks=1 00:10:04.365 --rc geninfo_unexecuted_blocks=1 00:10:04.365 00:10:04.365 ' 00:10:04.365 09:02:40 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:04.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.365 --rc genhtml_branch_coverage=1 00:10:04.365 --rc genhtml_function_coverage=1 00:10:04.365 --rc genhtml_legend=1 00:10:04.365 --rc geninfo_all_blocks=1 00:10:04.365 --rc geninfo_unexecuted_blocks=1 00:10:04.365 00:10:04.365 ' 00:10:04.365 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:36a6fb36-c4b3-4edd-b999-d3b5472e1260 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=36a6fb36-c4b3-4edd-b999-d3b5472e1260 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.365 09:02:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.365 09:02:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.365 09:02:40 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.365 09:02:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.365 09:02:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:04.365 09:02:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.365 09:02:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.366 09:02:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:10:04.366 09:02:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.366 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.366 09:02:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.366 09:02:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.366 09:02:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:04.366 INFO: launching applications... 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:10:04.366 09:02:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57508 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:04.366 Waiting for target to run... 00:10:04.366 09:02:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57508 /var/tmp/spdk_tgt.sock 00:10:04.366 09:02:40 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57508 ']' 00:10:04.366 09:02:40 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:04.366 09:02:40 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:04.366 09:02:40 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:10:04.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:04.366 09:02:40 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:04.366 09:02:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:04.624 [2024-11-06 09:02:41.004696] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:10:04.624 [2024-11-06 09:02:41.005051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57508 ] 00:10:05.191 [2024-11-06 09:02:41.484037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.191 [2024-11-06 09:02:41.634456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.128 09:02:42 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.129 00:10:06.129 INFO: shutting down applications... 00:10:06.129 09:02:42 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:06.129 09:02:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:10:06.129 09:02:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57508 ]] 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57508 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57508 00:10:06.129 09:02:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:06.388 09:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:06.388 09:02:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.388 09:02:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57508 00:10:06.388 09:02:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:06.968 09:02:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:06.968 09:02:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.968 09:02:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57508 00:10:06.968 09:02:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:07.538 09:02:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:07.538 09:02:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:07.538 09:02:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57508 00:10:07.538 09:02:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:08.110 09:02:44 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:10:08.110 09:02:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:08.110 09:02:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57508 00:10:08.110 09:02:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:08.368 09:02:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:08.368 09:02:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:08.368 09:02:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57508 00:10:08.368 09:02:44 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:08.935 09:02:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:08.935 09:02:45 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:08.935 09:02:45 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57508 00:10:08.935 SPDK target shutdown done 00:10:08.935 Success 00:10:08.935 09:02:45 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:08.935 09:02:45 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:08.935 09:02:45 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:08.935 09:02:45 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:08.935 09:02:45 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:08.935 00:10:08.935 real 0m4.692s 00:10:08.935 user 0m4.113s 00:10:08.935 sys 0m0.696s 00:10:08.935 ************************************ 00:10:08.935 END TEST json_config_extra_key 00:10:08.935 ************************************ 00:10:08.935 09:02:45 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.935 09:02:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:08.935 09:02:45 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:08.935 09:02:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:08.936 09:02:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.936 09:02:45 -- common/autotest_common.sh@10 -- # set +x 00:10:08.936 ************************************ 00:10:08.936 START TEST alias_rpc 00:10:08.936 ************************************ 00:10:08.936 09:02:45 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:09.195 * Looking for test storage... 00:10:09.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:09.195 09:02:45 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.195 09:02:45 alias_rpc -- scripts/common.sh@368 -- # return 0 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.195 --rc genhtml_branch_coverage=1 00:10:09.195 --rc genhtml_function_coverage=1 00:10:09.195 --rc genhtml_legend=1 00:10:09.195 --rc geninfo_all_blocks=1 00:10:09.195 --rc geninfo_unexecuted_blocks=1 00:10:09.195 00:10:09.195 ' 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.195 --rc genhtml_branch_coverage=1 00:10:09.195 --rc genhtml_function_coverage=1 00:10:09.195 --rc genhtml_legend=1 00:10:09.195 --rc geninfo_all_blocks=1 00:10:09.195 --rc geninfo_unexecuted_blocks=1 00:10:09.195 00:10:09.195 ' 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.195 --rc genhtml_branch_coverage=1 00:10:09.195 --rc genhtml_function_coverage=1 00:10:09.195 --rc genhtml_legend=1 00:10:09.195 --rc geninfo_all_blocks=1 00:10:09.195 --rc geninfo_unexecuted_blocks=1 00:10:09.195 00:10:09.195 ' 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.195 --rc genhtml_branch_coverage=1 00:10:09.195 --rc genhtml_function_coverage=1 00:10:09.195 --rc genhtml_legend=1 00:10:09.195 --rc geninfo_all_blocks=1 00:10:09.195 --rc geninfo_unexecuted_blocks=1 00:10:09.195 00:10:09.195 ' 00:10:09.195 09:02:45 alias_rpc 
-- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:09.195 09:02:45 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57625 00:10:09.195 09:02:45 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57625 00:10:09.195 09:02:45 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57625 ']' 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:09.195 09:02:45 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.454 [2024-11-06 09:02:45.766628] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:10:09.454 [2024-11-06 09:02:45.767141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57625 ] 00:10:09.454 [2024-11-06 09:02:45.958227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.712 [2024-11-06 09:02:46.115574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.678 09:02:46 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:10.678 09:02:46 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:10.678 09:02:46 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:10.936 09:02:47 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57625 00:10:10.936 09:02:47 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57625 ']' 00:10:10.936 09:02:47 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57625 00:10:10.936 09:02:47 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:10:10.936 09:02:47 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:10.937 09:02:47 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57625 00:10:10.937 killing process with pid 57625 00:10:10.937 09:02:47 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:10.937 09:02:47 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:10.937 09:02:47 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57625' 00:10:10.937 09:02:47 alias_rpc -- common/autotest_common.sh@971 -- # kill 57625 00:10:10.937 09:02:47 alias_rpc -- common/autotest_common.sh@976 -- # wait 57625 00:10:13.473 ************************************ 00:10:13.473 END TEST alias_rpc 00:10:13.473 ************************************ 00:10:13.473 00:10:13.473 real 
0m4.210s 00:10:13.473 user 0m4.321s 00:10:13.473 sys 0m0.647s 00:10:13.473 09:02:49 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.473 09:02:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.473 09:02:49 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:10:13.473 09:02:49 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:13.473 09:02:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:13.473 09:02:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:13.473 09:02:49 -- common/autotest_common.sh@10 -- # set +x 00:10:13.473 ************************************ 00:10:13.473 START TEST spdkcli_tcp 00:10:13.473 ************************************ 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:13.473 * Looking for test storage... 00:10:13.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.473 
09:02:49 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.473 09:02:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:13.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.473 --rc genhtml_branch_coverage=1 00:10:13.473 --rc genhtml_function_coverage=1 00:10:13.473 --rc genhtml_legend=1 
00:10:13.473 --rc geninfo_all_blocks=1 00:10:13.473 --rc geninfo_unexecuted_blocks=1 00:10:13.473 00:10:13.473 ' 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:13.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.473 --rc genhtml_branch_coverage=1 00:10:13.473 --rc genhtml_function_coverage=1 00:10:13.473 --rc genhtml_legend=1 00:10:13.473 --rc geninfo_all_blocks=1 00:10:13.473 --rc geninfo_unexecuted_blocks=1 00:10:13.473 00:10:13.473 ' 00:10:13.473 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:13.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.474 --rc genhtml_branch_coverage=1 00:10:13.474 --rc genhtml_function_coverage=1 00:10:13.474 --rc genhtml_legend=1 00:10:13.474 --rc geninfo_all_blocks=1 00:10:13.474 --rc geninfo_unexecuted_blocks=1 00:10:13.474 00:10:13.474 ' 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:13.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.474 --rc genhtml_branch_coverage=1 00:10:13.474 --rc genhtml_function_coverage=1 00:10:13.474 --rc genhtml_legend=1 00:10:13.474 --rc geninfo_all_blocks=1 00:10:13.474 --rc geninfo_unexecuted_blocks=1 00:10:13.474 00:10:13.474 ' 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:13.474 09:02:49 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57732 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57732 00:10:13.474 09:02:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57732 ']' 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:13.474 09:02:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.731 [2024-11-06 09:02:50.029450] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:10:13.731 [2024-11-06 09:02:50.029897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57732 ] 00:10:13.731 [2024-11-06 09:02:50.222163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:13.991 [2024-11-06 09:02:50.384095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.991 [2024-11-06 09:02:50.384115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.927 09:02:51 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:14.927 09:02:51 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:10:14.927 09:02:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57749 00:10:14.927 09:02:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:14.927 09:02:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:15.187 [ 00:10:15.187 "bdev_malloc_delete", 00:10:15.187 "bdev_malloc_create", 00:10:15.187 "bdev_null_resize", 00:10:15.187 "bdev_null_delete", 00:10:15.187 "bdev_null_create", 00:10:15.187 "bdev_nvme_cuse_unregister", 00:10:15.187 "bdev_nvme_cuse_register", 00:10:15.187 "bdev_opal_new_user", 00:10:15.187 "bdev_opal_set_lock_state", 00:10:15.187 "bdev_opal_delete", 00:10:15.187 "bdev_opal_get_info", 00:10:15.187 "bdev_opal_create", 00:10:15.187 "bdev_nvme_opal_revert", 00:10:15.187 "bdev_nvme_opal_init", 00:10:15.187 "bdev_nvme_send_cmd", 00:10:15.187 "bdev_nvme_set_keys", 00:10:15.187 "bdev_nvme_get_path_iostat", 00:10:15.187 "bdev_nvme_get_mdns_discovery_info", 00:10:15.187 "bdev_nvme_stop_mdns_discovery", 00:10:15.187 "bdev_nvme_start_mdns_discovery", 00:10:15.187 "bdev_nvme_set_multipath_policy", 00:10:15.187 
"bdev_nvme_set_preferred_path", 00:10:15.187 "bdev_nvme_get_io_paths", 00:10:15.187 "bdev_nvme_remove_error_injection", 00:10:15.187 "bdev_nvme_add_error_injection", 00:10:15.187 "bdev_nvme_get_discovery_info", 00:10:15.187 "bdev_nvme_stop_discovery", 00:10:15.187 "bdev_nvme_start_discovery", 00:10:15.187 "bdev_nvme_get_controller_health_info", 00:10:15.187 "bdev_nvme_disable_controller", 00:10:15.187 "bdev_nvme_enable_controller", 00:10:15.187 "bdev_nvme_reset_controller", 00:10:15.187 "bdev_nvme_get_transport_statistics", 00:10:15.187 "bdev_nvme_apply_firmware", 00:10:15.187 "bdev_nvme_detach_controller", 00:10:15.187 "bdev_nvme_get_controllers", 00:10:15.187 "bdev_nvme_attach_controller", 00:10:15.187 "bdev_nvme_set_hotplug", 00:10:15.187 "bdev_nvme_set_options", 00:10:15.187 "bdev_passthru_delete", 00:10:15.187 "bdev_passthru_create", 00:10:15.187 "bdev_lvol_set_parent_bdev", 00:10:15.187 "bdev_lvol_set_parent", 00:10:15.187 "bdev_lvol_check_shallow_copy", 00:10:15.188 "bdev_lvol_start_shallow_copy", 00:10:15.188 "bdev_lvol_grow_lvstore", 00:10:15.188 "bdev_lvol_get_lvols", 00:10:15.188 "bdev_lvol_get_lvstores", 00:10:15.188 "bdev_lvol_delete", 00:10:15.188 "bdev_lvol_set_read_only", 00:10:15.188 "bdev_lvol_resize", 00:10:15.188 "bdev_lvol_decouple_parent", 00:10:15.188 "bdev_lvol_inflate", 00:10:15.188 "bdev_lvol_rename", 00:10:15.188 "bdev_lvol_clone_bdev", 00:10:15.188 "bdev_lvol_clone", 00:10:15.188 "bdev_lvol_snapshot", 00:10:15.188 "bdev_lvol_create", 00:10:15.188 "bdev_lvol_delete_lvstore", 00:10:15.188 "bdev_lvol_rename_lvstore", 00:10:15.188 "bdev_lvol_create_lvstore", 00:10:15.188 "bdev_raid_set_options", 00:10:15.188 "bdev_raid_remove_base_bdev", 00:10:15.188 "bdev_raid_add_base_bdev", 00:10:15.188 "bdev_raid_delete", 00:10:15.188 "bdev_raid_create", 00:10:15.188 "bdev_raid_get_bdevs", 00:10:15.188 "bdev_error_inject_error", 00:10:15.188 "bdev_error_delete", 00:10:15.188 "bdev_error_create", 00:10:15.188 "bdev_split_delete", 00:10:15.188 
"bdev_split_create", 00:10:15.188 "bdev_delay_delete", 00:10:15.188 "bdev_delay_create", 00:10:15.188 "bdev_delay_update_latency", 00:10:15.188 "bdev_zone_block_delete", 00:10:15.188 "bdev_zone_block_create", 00:10:15.188 "blobfs_create", 00:10:15.188 "blobfs_detect", 00:10:15.188 "blobfs_set_cache_size", 00:10:15.188 "bdev_aio_delete", 00:10:15.188 "bdev_aio_rescan", 00:10:15.188 "bdev_aio_create", 00:10:15.188 "bdev_ftl_set_property", 00:10:15.188 "bdev_ftl_get_properties", 00:10:15.188 "bdev_ftl_get_stats", 00:10:15.188 "bdev_ftl_unmap", 00:10:15.188 "bdev_ftl_unload", 00:10:15.188 "bdev_ftl_delete", 00:10:15.188 "bdev_ftl_load", 00:10:15.188 "bdev_ftl_create", 00:10:15.188 "bdev_virtio_attach_controller", 00:10:15.188 "bdev_virtio_scsi_get_devices", 00:10:15.188 "bdev_virtio_detach_controller", 00:10:15.188 "bdev_virtio_blk_set_hotplug", 00:10:15.188 "bdev_iscsi_delete", 00:10:15.188 "bdev_iscsi_create", 00:10:15.188 "bdev_iscsi_set_options", 00:10:15.188 "accel_error_inject_error", 00:10:15.188 "ioat_scan_accel_module", 00:10:15.188 "dsa_scan_accel_module", 00:10:15.188 "iaa_scan_accel_module", 00:10:15.188 "keyring_file_remove_key", 00:10:15.188 "keyring_file_add_key", 00:10:15.188 "keyring_linux_set_options", 00:10:15.188 "fsdev_aio_delete", 00:10:15.188 "fsdev_aio_create", 00:10:15.188 "iscsi_get_histogram", 00:10:15.188 "iscsi_enable_histogram", 00:10:15.188 "iscsi_set_options", 00:10:15.188 "iscsi_get_auth_groups", 00:10:15.188 "iscsi_auth_group_remove_secret", 00:10:15.188 "iscsi_auth_group_add_secret", 00:10:15.188 "iscsi_delete_auth_group", 00:10:15.188 "iscsi_create_auth_group", 00:10:15.188 "iscsi_set_discovery_auth", 00:10:15.188 "iscsi_get_options", 00:10:15.188 "iscsi_target_node_request_logout", 00:10:15.188 "iscsi_target_node_set_redirect", 00:10:15.188 "iscsi_target_node_set_auth", 00:10:15.188 "iscsi_target_node_add_lun", 00:10:15.188 "iscsi_get_stats", 00:10:15.188 "iscsi_get_connections", 00:10:15.188 "iscsi_portal_group_set_auth", 
00:10:15.188 "iscsi_start_portal_group", 00:10:15.188 "iscsi_delete_portal_group", 00:10:15.188 "iscsi_create_portal_group", 00:10:15.188 "iscsi_get_portal_groups", 00:10:15.188 "iscsi_delete_target_node", 00:10:15.188 "iscsi_target_node_remove_pg_ig_maps", 00:10:15.188 "iscsi_target_node_add_pg_ig_maps", 00:10:15.188 "iscsi_create_target_node", 00:10:15.188 "iscsi_get_target_nodes", 00:10:15.188 "iscsi_delete_initiator_group", 00:10:15.188 "iscsi_initiator_group_remove_initiators", 00:10:15.188 "iscsi_initiator_group_add_initiators", 00:10:15.188 "iscsi_create_initiator_group", 00:10:15.188 "iscsi_get_initiator_groups", 00:10:15.188 "nvmf_set_crdt", 00:10:15.188 "nvmf_set_config", 00:10:15.188 "nvmf_set_max_subsystems", 00:10:15.188 "nvmf_stop_mdns_prr", 00:10:15.188 "nvmf_publish_mdns_prr", 00:10:15.188 "nvmf_subsystem_get_listeners", 00:10:15.188 "nvmf_subsystem_get_qpairs", 00:10:15.188 "nvmf_subsystem_get_controllers", 00:10:15.188 "nvmf_get_stats", 00:10:15.188 "nvmf_get_transports", 00:10:15.188 "nvmf_create_transport", 00:10:15.188 "nvmf_get_targets", 00:10:15.188 "nvmf_delete_target", 00:10:15.188 "nvmf_create_target", 00:10:15.188 "nvmf_subsystem_allow_any_host", 00:10:15.188 "nvmf_subsystem_set_keys", 00:10:15.188 "nvmf_subsystem_remove_host", 00:10:15.188 "nvmf_subsystem_add_host", 00:10:15.188 "nvmf_ns_remove_host", 00:10:15.188 "nvmf_ns_add_host", 00:10:15.188 "nvmf_subsystem_remove_ns", 00:10:15.188 "nvmf_subsystem_set_ns_ana_group", 00:10:15.188 "nvmf_subsystem_add_ns", 00:10:15.188 "nvmf_subsystem_listener_set_ana_state", 00:10:15.188 "nvmf_discovery_get_referrals", 00:10:15.188 "nvmf_discovery_remove_referral", 00:10:15.188 "nvmf_discovery_add_referral", 00:10:15.188 "nvmf_subsystem_remove_listener", 00:10:15.188 "nvmf_subsystem_add_listener", 00:10:15.188 "nvmf_delete_subsystem", 00:10:15.188 "nvmf_create_subsystem", 00:10:15.188 "nvmf_get_subsystems", 00:10:15.188 "env_dpdk_get_mem_stats", 00:10:15.188 "nbd_get_disks", 00:10:15.188 
"nbd_stop_disk", 00:10:15.188 "nbd_start_disk", 00:10:15.188 "ublk_recover_disk", 00:10:15.188 "ublk_get_disks", 00:10:15.188 "ublk_stop_disk", 00:10:15.188 "ublk_start_disk", 00:10:15.188 "ublk_destroy_target", 00:10:15.188 "ublk_create_target", 00:10:15.188 "virtio_blk_create_transport", 00:10:15.188 "virtio_blk_get_transports", 00:10:15.188 "vhost_controller_set_coalescing", 00:10:15.188 "vhost_get_controllers", 00:10:15.188 "vhost_delete_controller", 00:10:15.188 "vhost_create_blk_controller", 00:10:15.188 "vhost_scsi_controller_remove_target", 00:10:15.188 "vhost_scsi_controller_add_target", 00:10:15.188 "vhost_start_scsi_controller", 00:10:15.188 "vhost_create_scsi_controller", 00:10:15.188 "thread_set_cpumask", 00:10:15.188 "scheduler_set_options", 00:10:15.188 "framework_get_governor", 00:10:15.188 "framework_get_scheduler", 00:10:15.188 "framework_set_scheduler", 00:10:15.188 "framework_get_reactors", 00:10:15.188 "thread_get_io_channels", 00:10:15.188 "thread_get_pollers", 00:10:15.188 "thread_get_stats", 00:10:15.188 "framework_monitor_context_switch", 00:10:15.188 "spdk_kill_instance", 00:10:15.188 "log_enable_timestamps", 00:10:15.188 "log_get_flags", 00:10:15.188 "log_clear_flag", 00:10:15.188 "log_set_flag", 00:10:15.188 "log_get_level", 00:10:15.188 "log_set_level", 00:10:15.188 "log_get_print_level", 00:10:15.188 "log_set_print_level", 00:10:15.188 "framework_enable_cpumask_locks", 00:10:15.188 "framework_disable_cpumask_locks", 00:10:15.188 "framework_wait_init", 00:10:15.188 "framework_start_init", 00:10:15.188 "scsi_get_devices", 00:10:15.188 "bdev_get_histogram", 00:10:15.188 "bdev_enable_histogram", 00:10:15.188 "bdev_set_qos_limit", 00:10:15.188 "bdev_set_qd_sampling_period", 00:10:15.188 "bdev_get_bdevs", 00:10:15.188 "bdev_reset_iostat", 00:10:15.188 "bdev_get_iostat", 00:10:15.188 "bdev_examine", 00:10:15.188 "bdev_wait_for_examine", 00:10:15.188 "bdev_set_options", 00:10:15.188 "accel_get_stats", 00:10:15.188 "accel_set_options", 
00:10:15.188 "accel_set_driver", 00:10:15.188 "accel_crypto_key_destroy", 00:10:15.188 "accel_crypto_keys_get", 00:10:15.188 "accel_crypto_key_create", 00:10:15.188 "accel_assign_opc", 00:10:15.188 "accel_get_module_info", 00:10:15.188 "accel_get_opc_assignments", 00:10:15.188 "vmd_rescan", 00:10:15.188 "vmd_remove_device", 00:10:15.188 "vmd_enable", 00:10:15.188 "sock_get_default_impl", 00:10:15.188 "sock_set_default_impl", 00:10:15.188 "sock_impl_set_options", 00:10:15.188 "sock_impl_get_options", 00:10:15.188 "iobuf_get_stats", 00:10:15.188 "iobuf_set_options", 00:10:15.188 "keyring_get_keys", 00:10:15.188 "framework_get_pci_devices", 00:10:15.188 "framework_get_config", 00:10:15.188 "framework_get_subsystems", 00:10:15.188 "fsdev_set_opts", 00:10:15.188 "fsdev_get_opts", 00:10:15.188 "trace_get_info", 00:10:15.188 "trace_get_tpoint_group_mask", 00:10:15.188 "trace_disable_tpoint_group", 00:10:15.188 "trace_enable_tpoint_group", 00:10:15.188 "trace_clear_tpoint_mask", 00:10:15.188 "trace_set_tpoint_mask", 00:10:15.188 "notify_get_notifications", 00:10:15.188 "notify_get_types", 00:10:15.188 "spdk_get_version", 00:10:15.188 "rpc_get_methods" 00:10:15.188 ] 00:10:15.188 09:02:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:15.188 09:02:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:15.188 09:02:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57732 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57732 ']' 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57732 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:15.188 09:02:51 spdkcli_tcp -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57732 00:10:15.188 killing process with pid 57732 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57732' 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57732 00:10:15.188 09:02:51 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57732 00:10:17.722 ************************************ 00:10:17.722 END TEST spdkcli_tcp 00:10:17.722 ************************************ 00:10:17.722 00:10:17.722 real 0m4.283s 00:10:17.722 user 0m7.755s 00:10:17.722 sys 0m0.684s 00:10:17.722 09:02:53 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:17.722 09:02:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.722 09:02:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:17.722 09:02:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:17.722 09:02:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:17.722 09:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:17.722 ************************************ 00:10:17.722 START TEST dpdk_mem_utility 00:10:17.722 ************************************ 00:10:17.722 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:17.722 * Looking for test storage... 
00:10:17.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:17.722 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:17.722 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:10:17.722 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:17.722 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.722 09:02:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:10:17.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.723 09:02:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:17.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.723 --rc genhtml_branch_coverage=1 00:10:17.723 --rc genhtml_function_coverage=1 00:10:17.723 --rc genhtml_legend=1 00:10:17.723 --rc geninfo_all_blocks=1 00:10:17.723 --rc geninfo_unexecuted_blocks=1 00:10:17.723 00:10:17.723 ' 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:17.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.723 --rc genhtml_branch_coverage=1 00:10:17.723 --rc genhtml_function_coverage=1 
00:10:17.723 --rc genhtml_legend=1 00:10:17.723 --rc geninfo_all_blocks=1 00:10:17.723 --rc geninfo_unexecuted_blocks=1 00:10:17.723 00:10:17.723 ' 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:17.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.723 --rc genhtml_branch_coverage=1 00:10:17.723 --rc genhtml_function_coverage=1 00:10:17.723 --rc genhtml_legend=1 00:10:17.723 --rc geninfo_all_blocks=1 00:10:17.723 --rc geninfo_unexecuted_blocks=1 00:10:17.723 00:10:17.723 ' 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:17.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.723 --rc genhtml_branch_coverage=1 00:10:17.723 --rc genhtml_function_coverage=1 00:10:17.723 --rc genhtml_legend=1 00:10:17.723 --rc geninfo_all_blocks=1 00:10:17.723 --rc geninfo_unexecuted_blocks=1 00:10:17.723 00:10:17.723 ' 00:10:17.723 09:02:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:17.723 09:02:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57854 00:10:17.723 09:02:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57854 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57854 ']' 00:10:17.723 09:02:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:17.723 09:02:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:17.982 [2024-11-06 09:02:54.356557] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:10:17.982 [2024-11-06 09:02:54.356980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57854 ] 00:10:18.241 [2024-11-06 09:02:54.543522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.241 [2024-11-06 09:02:54.701587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.177 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:19.178 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:10:19.178 09:02:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:19.178 09:02:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:19.178 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.178 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:19.178 { 00:10:19.178 "filename": "/tmp/spdk_mem_dump.txt" 00:10:19.178 } 00:10:19.178 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.178 09:02:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:19.178 DPDK memory size 816.000000 MiB in 1 heap(s) 00:10:19.178 1 heaps totaling size 816.000000 MiB 00:10:19.178 size: 816.000000 MiB heap id: 0 00:10:19.178 end heaps---------- 00:10:19.178 9 mempools totaling size 595.772034 MiB 00:10:19.178 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:19.178 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:19.178 size: 92.545471 MiB name: bdev_io_57854 00:10:19.178 size: 50.003479 MiB name: msgpool_57854 00:10:19.178 size: 36.509338 MiB name: fsdev_io_57854 00:10:19.178 size: 21.763794 MiB name: PDU_Pool 00:10:19.178 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:19.178 size: 4.133484 MiB name: evtpool_57854 00:10:19.178 size: 0.026123 MiB name: Session_Pool 00:10:19.178 end mempools------- 00:10:19.178 6 memzones totaling size 4.142822 MiB 00:10:19.178 size: 1.000366 MiB name: RG_ring_0_57854 00:10:19.178 size: 1.000366 MiB name: RG_ring_1_57854 00:10:19.178 size: 1.000366 MiB name: RG_ring_4_57854 00:10:19.178 size: 1.000366 MiB name: RG_ring_5_57854 00:10:19.178 size: 0.125366 MiB name: RG_ring_2_57854 00:10:19.178 size: 0.015991 MiB name: RG_ring_3_57854 00:10:19.178 end memzones------- 00:10:19.178 09:02:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:19.438 heap id: 0 total size: 816.000000 MiB number of busy elements: 319 number of free elements: 18 00:10:19.438 list of free elements. 
size: 16.790405 MiB 00:10:19.438 element at address: 0x200006400000 with size: 1.995972 MiB 00:10:19.438 element at address: 0x20000a600000 with size: 1.995972 MiB 00:10:19.438 element at address: 0x200003e00000 with size: 1.991028 MiB 00:10:19.438 element at address: 0x200018d00040 with size: 0.999939 MiB 00:10:19.438 element at address: 0x200019100040 with size: 0.999939 MiB 00:10:19.438 element at address: 0x200019200000 with size: 0.999084 MiB 00:10:19.438 element at address: 0x200031e00000 with size: 0.994324 MiB 00:10:19.438 element at address: 0x200000400000 with size: 0.992004 MiB 00:10:19.438 element at address: 0x200018a00000 with size: 0.959656 MiB 00:10:19.438 element at address: 0x200019500040 with size: 0.936401 MiB 00:10:19.438 element at address: 0x200000200000 with size: 0.716980 MiB 00:10:19.438 element at address: 0x20001ac00000 with size: 0.560730 MiB 00:10:19.438 element at address: 0x200000c00000 with size: 0.490173 MiB 00:10:19.438 element at address: 0x200018e00000 with size: 0.487976 MiB 00:10:19.438 element at address: 0x200019600000 with size: 0.485413 MiB 00:10:19.438 element at address: 0x200012c00000 with size: 0.443481 MiB 00:10:19.438 element at address: 0x200028000000 with size: 0.390442 MiB 00:10:19.438 element at address: 0x200000800000 with size: 0.350891 MiB 00:10:19.438 list of standard malloc elements. 
size: 199.288696 MiB 00:10:19.438 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:10:19.438 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:10:19.438 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:10:19.438 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:10:19.438 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:10:19.438 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:10:19.438 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:10:19.438 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:10:19.438 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:10:19.438 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:10:19.438 element at address: 0x200012bff040 with size: 0.000305 MiB 00:10:19.438 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:10:19.438 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:10:19.438 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:10:19.439 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:10:19.439 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200000cff000 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff180 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff280 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff380 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff480 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff580 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff680 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff780 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bff880 with size: 0.000244 MiB 00:10:19.439 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71880 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71980 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c72080 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012c72180 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:10:19.439 
element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:10:19.439 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:10:19.439 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac90dc0 with size: 0.000244 
MiB 00:10:19.439 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac929c0 
with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:10:19.439 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:10:19.440 element at 
address: 0x20001ac945c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:10:19.440 element at address: 0x200028063f40 with size: 0.000244 MiB 00:10:19.440 element at address: 0x200028064040 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806af80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b080 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b180 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b280 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b380 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b480 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b580 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b680 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b780 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806b880 with size: 0.000244 MiB 
00:10:19.440 element at address: 0x20002806b980 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806be80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c080 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c180 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c280 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c380 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c480 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c580 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c680 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c780 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c880 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806c980 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d080 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d180 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d280 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d380 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d480 with 
size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d580 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d680 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d780 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d880 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806d980 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806da80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806db80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806de80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806df80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e080 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e180 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e280 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e380 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e480 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e580 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e680 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e780 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e880 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806e980 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:10:19.440 element at address: 
0x20002806f080 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f180 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f280 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f380 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f480 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f580 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f680 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f780 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f880 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806f980 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:10:19.440 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:10:19.440 list of memzone associated elements. 
size: 599.920898 MiB 00:10:19.440 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:10:19.440 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:19.440 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:10:19.440 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:19.440 element at address: 0x200012df4740 with size: 92.045105 MiB 00:10:19.440 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57854_0 00:10:19.440 element at address: 0x200000dff340 with size: 48.003113 MiB 00:10:19.440 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57854_0 00:10:19.440 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:10:19.440 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57854_0 00:10:19.440 element at address: 0x2000197be900 with size: 20.255615 MiB 00:10:19.440 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:19.440 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:10:19.440 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:19.440 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:10:19.440 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57854_0 00:10:19.440 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:10:19.440 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57854 00:10:19.440 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:10:19.440 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57854 00:10:19.440 element at address: 0x200018efde00 with size: 1.008179 MiB 00:10:19.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:19.440 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:10:19.440 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:19.440 element at address: 0x200018afde00 with size: 1.008179 MiB 00:10:19.440 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:19.440 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:10:19.440 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:19.440 element at address: 0x200000cff100 with size: 1.000549 MiB 00:10:19.440 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57854 00:10:19.440 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:10:19.440 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57854 00:10:19.440 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:10:19.440 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57854 00:10:19.441 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:10:19.441 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57854 00:10:19.441 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:10:19.441 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57854 00:10:19.441 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:10:19.441 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57854 00:10:19.441 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:10:19.441 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:19.441 element at address: 0x200012c72280 with size: 0.500549 MiB 00:10:19.441 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:19.441 element at address: 0x20001967c440 with size: 0.250549 MiB 00:10:19.441 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:19.441 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:10:19.441 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57854 00:10:19.441 element at address: 0x20000085df80 with size: 0.125549 MiB 00:10:19.441 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57854 00:10:19.441 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:10:19.441 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:10:19.441 element at address: 0x200028064140 with size: 0.023804 MiB
00:10:19.441 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:10:19.441 element at address: 0x200000859d40 with size: 0.016174 MiB
00:10:19.441 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57854
00:10:19.441 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:10:19.441 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:10:19.441 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:10:19.441 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57854
00:10:19.441 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:10:19.441 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57854
00:10:19.441 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:10:19.441 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57854
00:10:19.441 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:10:19.441 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:10:19.441 09:02:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:10:19.441 09:02:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57854
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57854 ']'
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57854
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57854
killing process with pid 57854
09:02:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57854'
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57854
00:10:19.441 09:02:55 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57854
00:10:21.976 ************************************
00:10:21.976 END TEST dpdk_mem_utility
00:10:21.976 ************************************
00:10:21.976
00:10:21.976 real 0m4.040s
00:10:21.976 user 0m4.080s
00:10:21.976 sys 0m0.666s
00:10:21.976 09:02:58 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:21.976 09:02:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:10:21.976 09:02:58 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:10:21.976 09:02:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:21.976 09:02:58 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:21.976 09:02:58 -- common/autotest_common.sh@10 -- # set +x
00:10:21.976 ************************************
00:10:21.976 START TEST event
00:10:21.976 ************************************
00:10:21.976 09:02:58 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
* Looking for test storage...
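The dpdk_mem_utility dump above is built from regular one-record lines of the shape "element at address: 0x... with size: N MiB". A minimal, hypothetical awk/bash sketch for summarizing such a dump (the helper name and approach are illustrative, not part of SPDK's test_dpdk_mem_info.sh):

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of SPDK): count the "element at address"
# records in a DPDK memory dump on stdin and total their sizes. It relies
# only on the fixed record shape "element at address: 0x... with size: <N> MiB",
# where the size is the second-to-last whitespace-separated field.
summarize_elements() {
    awk '/element at address/ { count++; total += $(NF - 1) }
         END { printf "%d elements, %.6f MiB\n", count, total }'
}

# Sample records in the same shape as the dump above.
summarize_elements <<'EOF'
element at address: 0x2000004fdf40 with size: 0.000244 MiB
element at address: 0x2000004fe040 with size: 0.000244 MiB
EOF
# prints: 2 elements, 0.000488 MiB
```

Lines such as "associated memzone info: ..." do not match the pattern and are skipped, so the same filter works on both the malloc-element and memzone sections of the dump.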
00:10:21.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1691 -- # lcov --version 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:21.976 09:02:58 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.976 09:02:58 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.976 09:02:58 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.976 09:02:58 event -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.976 09:02:58 event -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.976 09:02:58 event -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.976 09:02:58 event -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.976 09:02:58 event -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.976 09:02:58 event -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.976 09:02:58 event -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.976 09:02:58 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.976 09:02:58 event -- scripts/common.sh@344 -- # case "$op" in 00:10:21.976 09:02:58 event -- scripts/common.sh@345 -- # : 1 00:10:21.976 09:02:58 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.976 09:02:58 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:21.976 09:02:58 event -- scripts/common.sh@365 -- # decimal 1 00:10:21.976 09:02:58 event -- scripts/common.sh@353 -- # local d=1 00:10:21.976 09:02:58 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.976 09:02:58 event -- scripts/common.sh@355 -- # echo 1 00:10:21.976 09:02:58 event -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.976 09:02:58 event -- scripts/common.sh@366 -- # decimal 2 00:10:21.976 09:02:58 event -- scripts/common.sh@353 -- # local d=2 00:10:21.976 09:02:58 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.976 09:02:58 event -- scripts/common.sh@355 -- # echo 2 00:10:21.976 09:02:58 event -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.976 09:02:58 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.976 09:02:58 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.976 09:02:58 event -- scripts/common.sh@368 -- # return 0 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:21.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.976 --rc genhtml_branch_coverage=1 00:10:21.976 --rc genhtml_function_coverage=1 00:10:21.976 --rc genhtml_legend=1 00:10:21.976 --rc geninfo_all_blocks=1 00:10:21.976 --rc geninfo_unexecuted_blocks=1 00:10:21.976 00:10:21.976 ' 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:21.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.976 --rc genhtml_branch_coverage=1 00:10:21.976 --rc genhtml_function_coverage=1 00:10:21.976 --rc genhtml_legend=1 00:10:21.976 --rc geninfo_all_blocks=1 00:10:21.976 --rc geninfo_unexecuted_blocks=1 00:10:21.976 00:10:21.976 ' 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:21.976 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:21.976 --rc genhtml_branch_coverage=1 00:10:21.976 --rc genhtml_function_coverage=1 00:10:21.976 --rc genhtml_legend=1 00:10:21.976 --rc geninfo_all_blocks=1 00:10:21.976 --rc geninfo_unexecuted_blocks=1 00:10:21.976 00:10:21.976 ' 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:21.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.976 --rc genhtml_branch_coverage=1 00:10:21.976 --rc genhtml_function_coverage=1 00:10:21.976 --rc genhtml_legend=1 00:10:21.976 --rc geninfo_all_blocks=1 00:10:21.976 --rc geninfo_unexecuted_blocks=1 00:10:21.976 00:10:21.976 ' 00:10:21.976 09:02:58 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:21.976 09:02:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:21.976 09:02:58 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:21.976 09:02:58 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:21.976 09:02:58 event -- common/autotest_common.sh@10 -- # set +x 00:10:21.976 ************************************ 00:10:21.976 START TEST event_perf 00:10:21.976 ************************************ 00:10:21.976 09:02:58 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:21.976 Running I/O for 1 seconds...[2024-11-06 09:02:58.383723] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:10:21.976 [2024-11-06 09:02:58.384044] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57962 ] 00:10:22.235 [2024-11-06 09:02:58.580509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.235 [2024-11-06 09:02:58.759380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.235 [2024-11-06 09:02:58.759539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.235 [2024-11-06 09:02:58.759661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.235 [2024-11-06 09:02:58.759674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.627 Running I/O for 1 seconds... 00:10:23.627 lcore 0: 186168 00:10:23.627 lcore 1: 186167 00:10:23.627 lcore 2: 186167 00:10:23.627 lcore 3: 186169 00:10:23.627 done. 
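The `cmp_versions`/`decimal` xtrace earlier in this run (the `ver1[v]`/`ver2[v]` comparison used by `lt 1.15 2` to decide whether the installed lcov is new enough) boils down to a field-by-field numeric compare of dotted version strings. A minimal standalone sketch of that idea (hypothetical helper name `cmp_lt`; this is a simplification, not the verbatim `scripts/common.sh` code):

```shell
#!/usr/bin/env bash
# cmp_lt VER1 VER2 -> exit 0 if VER1 is strictly older than VER2.
# Assumption: plain dotted numeric versions (e.g. 1.15, 2.0); the real
# scripts/common.sh splitter also handles '-' and ':' separators.
cmp_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"   # split "1.15" into (1 15)
    read -ra ver2 <<< "$2"   # split "2"    into (2)
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a > b )) && return 1                 # ver1 newer: not less-than
        (( a < b )) && return 0                 # ver1 older: less-than
    done
    return 1                                    # equal: not less-than
}

cmp_lt 1.15 2 && echo "1.15 < 2"
# → 1.15 < 2
```

In the trace this check gates whether the extra `--rc lcov_branch_coverage=1 ...` options are exported in `LCOV_OPTS`.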
00:10:23.627 00:10:23.627 real 0m1.674s 00:10:23.627 user 0m4.403s 00:10:23.627 sys 0m0.141s 00:10:23.627 09:03:00 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.627 09:03:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:23.627 ************************************ 00:10:23.627 END TEST event_perf 00:10:23.627 ************************************ 00:10:23.627 09:03:00 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:23.627 09:03:00 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:23.627 09:03:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.627 09:03:00 event -- common/autotest_common.sh@10 -- # set +x 00:10:23.627 ************************************ 00:10:23.627 START TEST event_reactor 00:10:23.627 ************************************ 00:10:23.627 09:03:00 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:23.627 [2024-11-06 09:03:00.109259] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:10:23.627 [2024-11-06 09:03:00.109402] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58007 ] 00:10:23.887 [2024-11-06 09:03:00.284728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.887 [2024-11-06 09:03:00.418335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.264 test_start 00:10:25.264 oneshot 00:10:25.264 tick 100 00:10:25.264 tick 100 00:10:25.264 tick 250 00:10:25.264 tick 100 00:10:25.264 tick 100 00:10:25.264 tick 100 00:10:25.264 tick 250 00:10:25.264 tick 500 00:10:25.264 tick 100 00:10:25.264 tick 100 00:10:25.264 tick 250 00:10:25.264 tick 100 00:10:25.264 tick 100 00:10:25.264 test_end 00:10:25.264 ************************************ 00:10:25.264 END TEST event_reactor 00:10:25.264 ************************************ 00:10:25.264 00:10:25.264 real 0m1.593s 00:10:25.264 user 0m1.385s 00:10:25.264 sys 0m0.098s 00:10:25.264 09:03:01 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:25.264 09:03:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:25.264 09:03:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:25.264 09:03:01 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:25.264 09:03:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:25.264 09:03:01 event -- common/autotest_common.sh@10 -- # set +x 00:10:25.264 ************************************ 00:10:25.264 START TEST event_reactor_perf 00:10:25.264 ************************************ 00:10:25.264 09:03:01 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:25.264 [2024-11-06 
09:03:01.754533] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:10:25.264 [2024-11-06 09:03:01.754710] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58044 ] 00:10:25.524 [2024-11-06 09:03:01.941141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.784 [2024-11-06 09:03:02.073574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.164 test_start 00:10:27.164 test_end 00:10:27.164 Performance: 277885 events per second 00:10:27.164 00:10:27.164 real 0m1.602s 00:10:27.164 user 0m1.389s 00:10:27.164 sys 0m0.102s 00:10:27.164 09:03:03 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:27.164 ************************************ 00:10:27.164 END TEST event_reactor_perf 00:10:27.164 09:03:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:27.164 ************************************ 00:10:27.164 09:03:03 event -- event/event.sh@49 -- # uname -s 00:10:27.164 09:03:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:27.164 09:03:03 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:27.164 09:03:03 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:27.164 09:03:03 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:27.164 09:03:03 event -- common/autotest_common.sh@10 -- # set +x 00:10:27.164 ************************************ 00:10:27.164 START TEST event_scheduler 00:10:27.164 ************************************ 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:27.164 * Looking for test storage... 
00:10:27.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.164 09:03:03 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:27.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.164 --rc genhtml_branch_coverage=1 00:10:27.164 --rc genhtml_function_coverage=1 00:10:27.164 --rc genhtml_legend=1 00:10:27.164 --rc geninfo_all_blocks=1 00:10:27.164 --rc geninfo_unexecuted_blocks=1 00:10:27.164 00:10:27.164 ' 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:27.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.164 --rc genhtml_branch_coverage=1 00:10:27.164 --rc genhtml_function_coverage=1 00:10:27.164 --rc 
genhtml_legend=1 00:10:27.164 --rc geninfo_all_blocks=1 00:10:27.164 --rc geninfo_unexecuted_blocks=1 00:10:27.164 00:10:27.164 ' 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:27.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.164 --rc genhtml_branch_coverage=1 00:10:27.164 --rc genhtml_function_coverage=1 00:10:27.164 --rc genhtml_legend=1 00:10:27.164 --rc geninfo_all_blocks=1 00:10:27.164 --rc geninfo_unexecuted_blocks=1 00:10:27.164 00:10:27.164 ' 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:27.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.164 --rc genhtml_branch_coverage=1 00:10:27.164 --rc genhtml_function_coverage=1 00:10:27.164 --rc genhtml_legend=1 00:10:27.164 --rc geninfo_all_blocks=1 00:10:27.164 --rc geninfo_unexecuted_blocks=1 00:10:27.164 00:10:27.164 ' 00:10:27.164 09:03:03 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:27.164 09:03:03 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58114 00:10:27.164 09:03:03 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:27.164 09:03:03 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:27.164 09:03:03 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58114 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58114 ']' 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:27.164 09:03:03 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:10:27.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.165 09:03:03 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:27.165 09:03:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:27.165 [2024-11-06 09:03:03.673228] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:10:27.165 [2024-11-06 09:03:03.674226] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58114 ] 00:10:27.424 [2024-11-06 09:03:03.869330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.682 [2024-11-06 09:03:04.032591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.682 [2024-11-06 09:03:04.032740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.682 [2024-11-06 09:03:04.032796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.682 [2024-11-06 09:03:04.032799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.256 09:03:04 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:28.256 09:03:04 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:10:28.256 09:03:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:28.256 09:03:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.256 09:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:28.256 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:28.256 POWER: Cannot set governor of lcore 0 to userspace 00:10:28.256 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:28.256 POWER: Cannot set governor of lcore 0 to performance 00:10:28.256 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:28.256 POWER: Cannot set governor of lcore 0 to userspace 00:10:28.256 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:28.256 POWER: Cannot set governor of lcore 0 to userspace 00:10:28.256 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:28.256 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:28.256 POWER: Unable to set Power Management Environment for lcore 0 00:10:28.256 [2024-11-06 09:03:04.759555] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:28.256 [2024-11-06 09:03:04.759584] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:28.256 [2024-11-06 09:03:04.759599] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:10:28.256 [2024-11-06 09:03:04.759626] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:28.256 [2024-11-06 09:03:04.759638] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:28.256 [2024-11-06 09:03:04.759652] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:28.256 09:03:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.256 09:03:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:28.256 09:03:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.256 09:03:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 [2024-11-06 09:03:05.096283] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
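The POWER/GUEST_CHANNEL errors above are expected inside a VM with no cpufreq sysfs entries: the DPDK governor fails to initialize and the dynamic scheduler proceeds without it. Meanwhile the harness is blocked in `waitforlisten` until the scheduler app opens `/var/tmp/spdk.sock`. A hedged sketch of that socket-polling idea (hypothetical `waitfor_sock`; the real `autotest_common.sh` helper also checks that the target process is still alive between retries):

```shell
#!/usr/bin/env bash
# waitfor_sock SOCK [RETRIES] -> exit 0 once SOCK exists as a UNIX socket,
# exit 1 after RETRIES polls. Assumption: 100ms poll interval, as a stand-in
# for the retry loop traced above (max_retries=100).
waitfor_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # socket node exists: app is listening
        sleep 0.1
    done
    return 1
}
```

Once the socket appears, `rpc_cmd` calls like `framework_set_scheduler dynamic` (seen above) can be issued against it.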
00:10:28.837 09:03:05 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:28.837 09:03:05 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:28.837 09:03:05 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 ************************************ 00:10:28.837 START TEST scheduler_create_thread 00:10:28.837 ************************************ 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 2 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 3 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 4 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 5 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 6 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.837 7 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 8 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 9 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 10 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.837 09:03:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:29.774 09:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.774 09:03:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:29.774 09:03:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:29.774 09:03:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.774 09:03:06 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 ************************************ 00:10:31.158 END TEST scheduler_create_thread 00:10:31.158 ************************************ 00:10:31.158 09:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.158 00:10:31.158 real 0m2.141s 00:10:31.158 user 0m0.020s 00:10:31.158 sys 0m0.005s 00:10:31.158 09:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:31.158 09:03:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:31.158 09:03:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:31.158 09:03:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58114 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58114 ']' 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58114 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58114 00:10:31.158 killing process with pid 58114 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58114' 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58114 00:10:31.158 09:03:07 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58114 00:10:31.417 [2024-11-06 09:03:07.730043] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:32.353 00:10:32.353 real 0m5.442s 00:10:32.353 user 0m9.548s 00:10:32.353 sys 0m0.542s 00:10:32.353 09:03:08 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:32.353 ************************************ 00:10:32.353 END TEST event_scheduler 00:10:32.353 ************************************ 00:10:32.353 09:03:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:32.353 09:03:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:32.353 09:03:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:32.353 09:03:08 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:32.353 09:03:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:32.353 09:03:08 event -- common/autotest_common.sh@10 -- # set +x 00:10:32.353 ************************************ 00:10:32.353 START TEST app_repeat 00:10:32.353 ************************************ 00:10:32.353 09:03:08 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58226 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:32.353 
09:03:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:32.353 Process app_repeat pid: 58226 00:10:32.353 spdk_app_start Round 0 00:10:32.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58226' 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:32.353 09:03:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58226 /var/tmp/spdk-nbd.sock 00:10:32.353 09:03:08 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58226 ']' 00:10:32.353 09:03:08 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:32.353 09:03:08 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:32.353 09:03:08 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:32.353 09:03:08 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:32.353 09:03:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:32.613 [2024-11-06 09:03:08.920377] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:10:32.613 [2024-11-06 09:03:08.920684] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58226 ] 00:10:32.613 [2024-11-06 09:03:09.096331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:32.872 [2024-11-06 09:03:09.232597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.872 [2024-11-06 09:03:09.232602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.808 09:03:09 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.808 09:03:09 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:33.808 09:03:09 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:33.808 Malloc0 00:10:33.808 09:03:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:34.381 Malloc1 00:10:34.381 09:03:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:34.381 09:03:10 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.381 09:03:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:34.382 09:03:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:34.382 09:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:34.382 09:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.382 09:03:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:34.640 /dev/nbd0 00:10:34.640 09:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:34.640 09:03:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:34.640 09:03:10 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:34.640 09:03:10 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:34.641 1+0 records in 00:10:34.641 1+0 
records out 00:10:34.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348283 s, 11.8 MB/s 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:34.641 09:03:10 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:34.641 09:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.641 09:03:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.641 09:03:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:34.900 /dev/nbd1 00:10:34.900 09:03:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:34.900 09:03:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:34.900 1+0 records in 00:10:34.900 1+0 records out 00:10:34.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308265 s, 13.3 MB/s 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:34.900 09:03:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:34.900 09:03:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:34.900 09:03:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:34.900 09:03:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:34.900 09:03:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.900 09:03:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:35.159 09:03:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:35.159 { 00:10:35.159 "nbd_device": "/dev/nbd0", 00:10:35.159 "bdev_name": "Malloc0" 00:10:35.159 }, 00:10:35.159 { 00:10:35.159 "nbd_device": "/dev/nbd1", 00:10:35.159 "bdev_name": "Malloc1" 00:10:35.159 } 00:10:35.159 ]' 00:10:35.159 09:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:35.159 09:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:35.159 { 00:10:35.159 "nbd_device": "/dev/nbd0", 00:10:35.159 "bdev_name": "Malloc0" 00:10:35.159 }, 00:10:35.159 { 00:10:35.159 "nbd_device": "/dev/nbd1", 00:10:35.159 "bdev_name": "Malloc1" 00:10:35.159 } 00:10:35.159 ]' 
00:10:35.159 09:03:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:35.159 /dev/nbd1' 00:10:35.159 09:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:35.159 /dev/nbd1' 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:35.160 256+0 records in 00:10:35.160 256+0 records out 00:10:35.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00908374 s, 115 MB/s 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:35.160 09:03:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:35.419 256+0 records in 00:10:35.419 256+0 records out 00:10:35.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257343 s, 40.7 MB/s 00:10:35.419 09:03:11 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:35.419 256+0 records in 00:10:35.419 256+0 records out 00:10:35.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289873 s, 36.2 MB/s 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.419 09:03:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:35.676 09:03:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.676 09:03:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.934 09:03:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:36.193 09:03:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:36.193 09:03:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:36.760 09:03:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:37.693 [2024-11-06 09:03:14.158437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:37.952 [2024-11-06 09:03:14.288304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.952 [2024-11-06 09:03:14.288321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.952 
[2024-11-06 09:03:14.480914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:37.952 [2024-11-06 09:03:14.481038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:39.853 spdk_app_start Round 1 00:10:39.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:39.853 09:03:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:39.853 09:03:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:39.853 09:03:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58226 /var/tmp/spdk-nbd.sock 00:10:39.853 09:03:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58226 ']' 00:10:39.853 09:03:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:39.853 09:03:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.853 09:03:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:39.854 09:03:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.854 09:03:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:39.854 09:03:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:39.854 09:03:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:39.854 09:03:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:40.422 Malloc0 00:10:40.422 09:03:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:40.681 Malloc1 00:10:40.681 09:03:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:40.681 09:03:17 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:40.681 09:03:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:40.940 /dev/nbd0 00:10:40.940 09:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:40.940 09:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:40.940 1+0 records in 00:10:40.940 1+0 records out 00:10:40.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579076 s, 7.1 MB/s 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:40.940 09:03:17 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:40.940 09:03:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:40.940 09:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:40.940 09:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:40.940 09:03:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:41.199 /dev/nbd1 00:10:41.199 09:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:41.199 09:03:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:41.199 1+0 records in 00:10:41.199 1+0 records out 00:10:41.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004127 s, 9.9 MB/s 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:41.199 09:03:17 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:41.199 09:03:17 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:41.199 09:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.199 09:03:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:41.199 09:03:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:41.199 09:03:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.199 09:03:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:41.458 { 00:10:41.458 "nbd_device": "/dev/nbd0", 00:10:41.458 "bdev_name": "Malloc0" 00:10:41.458 }, 00:10:41.458 { 00:10:41.458 "nbd_device": "/dev/nbd1", 00:10:41.458 "bdev_name": "Malloc1" 00:10:41.458 } 00:10:41.458 ]' 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:41.458 { 00:10:41.458 "nbd_device": "/dev/nbd0", 00:10:41.458 "bdev_name": "Malloc0" 00:10:41.458 }, 00:10:41.458 { 00:10:41.458 "nbd_device": "/dev/nbd1", 00:10:41.458 "bdev_name": "Malloc1" 00:10:41.458 } 00:10:41.458 ]' 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:41.458 /dev/nbd1' 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:41.458 /dev/nbd1' 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:41.458 
09:03:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:41.458 256+0 records in 00:10:41.458 256+0 records out 00:10:41.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00916677 s, 114 MB/s 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:41.458 256+0 records in 00:10:41.458 256+0 records out 00:10:41.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247299 s, 42.4 MB/s 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:41.458 09:03:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:41.717 256+0 records in 00:10:41.717 256+0 records out 00:10:41.717 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302191 s, 34.7 MB/s 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.717 09:03:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:41.975 09:03:18 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.975 09:03:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.234 09:03:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:42.492 09:03:18 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:42.492 09:03:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:42.492 09:03:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:43.059 09:03:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:43.994 [2024-11-06 09:03:20.403256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.252 [2024-11-06 09:03:20.528447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.252 [2024-11-06 09:03:20.528451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.252 [2024-11-06 09:03:20.716751] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:44.252 [2024-11-06 09:03:20.716871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:46.153 spdk_app_start Round 2 00:10:46.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:10:46.153 09:03:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:46.153 09:03:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:46.153 09:03:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58226 /var/tmp/spdk-nbd.sock 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58226 ']' 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:46.153 09:03:22 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:46.153 09:03:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:46.720 Malloc0 00:10:46.720 09:03:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:46.978 Malloc1 00:10:46.979 09:03:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:46.979 09:03:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:47.237 /dev/nbd0 00:10:47.237 09:03:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:47.237 09:03:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 
00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:47.237 1+0 records in 00:10:47.237 1+0 records out 00:10:47.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248128 s, 16.5 MB/s 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:47.237 09:03:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:47.237 09:03:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.237 09:03:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:47.237 09:03:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:47.497 /dev/nbd1 00:10:47.497 09:03:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:47.497 09:03:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:10:47.497 09:03:23 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:47.497 1+0 records in 00:10:47.497 1+0 records out 00:10:47.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244144 s, 16.8 MB/s 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:10:47.497 09:03:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:10:47.497 09:03:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:47.497 09:03:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:47.497 09:03:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:47.497 09:03:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:47.497 09:03:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:47.756 { 00:10:47.756 "nbd_device": "/dev/nbd0", 00:10:47.756 "bdev_name": "Malloc0" 00:10:47.756 }, 00:10:47.756 { 00:10:47.756 "nbd_device": "/dev/nbd1", 00:10:47.756 "bdev_name": "Malloc1" 00:10:47.756 } 00:10:47.756 ]' 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:47.756 { 
00:10:47.756 "nbd_device": "/dev/nbd0", 00:10:47.756 "bdev_name": "Malloc0" 00:10:47.756 }, 00:10:47.756 { 00:10:47.756 "nbd_device": "/dev/nbd1", 00:10:47.756 "bdev_name": "Malloc1" 00:10:47.756 } 00:10:47.756 ]' 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:47.756 /dev/nbd1' 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:47.756 /dev/nbd1' 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:47.756 256+0 records in 00:10:47.756 256+0 records out 00:10:47.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00805318 s, 130 MB/s 00:10:47.756 09:03:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:47.756 09:03:24 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:48.015 256+0 records in 00:10:48.015 256+0 records out 00:10:48.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028068 s, 37.4 MB/s 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:48.015 256+0 records in 00:10:48.015 256+0 records out 00:10:48.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.038021 s, 27.6 MB/s 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.015 09:03:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.274 09:03:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:48.532 09:03:25 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:48.532 09:03:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:49.097 09:03:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:49.097 09:03:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:49.353 09:03:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:50.729 
[2024-11-06 09:03:26.861891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:50.729 [2024-11-06 09:03:26.983880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.729 [2024-11-06 09:03:26.983889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.729 [2024-11-06 09:03:27.174523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:50.729 [2024-11-06 09:03:27.174605] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:52.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:52.631 09:03:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58226 /var/tmp/spdk-nbd.sock 00:10:52.631 09:03:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58226 ']' 00:10:52.631 09:03:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:52.631 09:03:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:52.631 09:03:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:52.631 09:03:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:52.631 09:03:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:10:52.631 09:03:29 event.app_repeat -- event/event.sh@39 -- # killprocess 58226 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58226 ']' 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58226 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58226 00:10:52.631 killing process with pid 58226 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58226' 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58226 00:10:52.631 09:03:29 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58226 00:10:53.566 spdk_app_start is called in Round 0. 00:10:53.566 Shutdown signal received, stop current app iteration 00:10:53.566 Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 reinitialization... 00:10:53.566 spdk_app_start is called in Round 1. 00:10:53.566 Shutdown signal received, stop current app iteration 00:10:53.566 Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 reinitialization... 00:10:53.566 spdk_app_start is called in Round 2. 
00:10:53.566 Shutdown signal received, stop current app iteration 00:10:53.566 Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 reinitialization... 00:10:53.566 spdk_app_start is called in Round 3. 00:10:53.566 Shutdown signal received, stop current app iteration 00:10:53.825 09:03:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:53.825 09:03:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:53.825 00:10:53.825 real 0m21.245s 00:10:53.825 user 0m46.829s 00:10:53.825 sys 0m3.044s 00:10:53.825 09:03:30 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:53.825 ************************************ 00:10:53.825 END TEST app_repeat 00:10:53.825 ************************************ 00:10:53.825 09:03:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:53.825 09:03:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:53.825 09:03:30 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:53.825 09:03:30 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:53.825 09:03:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.825 09:03:30 event -- common/autotest_common.sh@10 -- # set +x 00:10:53.825 ************************************ 00:10:53.825 START TEST cpu_locks 00:10:53.825 ************************************ 00:10:53.825 09:03:30 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:53.825 * Looking for test storage... 
00:10:53.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:53.825 09:03:30 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:53.825 09:03:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:10:53.825 09:03:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:53.825 09:03:30 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.825 09:03:30 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:53.825 09:03:30 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.825 09:03:30 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.826 --rc genhtml_branch_coverage=1 00:10:53.826 --rc genhtml_function_coverage=1 00:10:53.826 --rc genhtml_legend=1 00:10:53.826 --rc geninfo_all_blocks=1 00:10:53.826 --rc geninfo_unexecuted_blocks=1 00:10:53.826 00:10:53.826 ' 00:10:53.826 09:03:30 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.826 --rc genhtml_branch_coverage=1 00:10:53.826 --rc genhtml_function_coverage=1 00:10:53.826 --rc genhtml_legend=1 00:10:53.826 --rc geninfo_all_blocks=1 00:10:53.826 --rc geninfo_unexecuted_blocks=1 
00:10:53.826 00:10:53.826 ' 00:10:53.826 09:03:30 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.826 --rc genhtml_branch_coverage=1 00:10:53.826 --rc genhtml_function_coverage=1 00:10:53.826 --rc genhtml_legend=1 00:10:53.826 --rc geninfo_all_blocks=1 00:10:53.826 --rc geninfo_unexecuted_blocks=1 00:10:53.826 00:10:53.826 ' 00:10:53.826 09:03:30 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:53.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.826 --rc genhtml_branch_coverage=1 00:10:53.826 --rc genhtml_function_coverage=1 00:10:53.826 --rc genhtml_legend=1 00:10:53.826 --rc geninfo_all_blocks=1 00:10:53.826 --rc geninfo_unexecuted_blocks=1 00:10:53.826 00:10:53.826 ' 00:10:53.826 09:03:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:53.826 09:03:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:53.826 09:03:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:53.826 09:03:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:53.826 09:03:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:53.826 09:03:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.826 09:03:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:54.084 ************************************ 00:10:54.084 START TEST default_locks 00:10:54.084 ************************************ 00:10:54.084 09:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:10:54.084 09:03:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58695 00:10:54.084 09:03:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:54.084 
09:03:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58695 00:10:54.084 09:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58695 ']' 00:10:54.084 09:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.084 09:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:54.085 09:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.085 09:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:54.085 09:03:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:54.085 [2024-11-06 09:03:30.495623] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:10:54.085 [2024-11-06 09:03:30.496148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58695 ] 00:10:54.343 [2024-11-06 09:03:30.684281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.343 [2024-11-06 09:03:30.809293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.296 09:03:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:55.296 09:03:31 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:10:55.296 09:03:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58695 00:10:55.296 09:03:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:55.296 09:03:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58695 00:10:55.554 09:03:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58695 00:10:55.554 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58695 ']' 00:10:55.554 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58695 00:10:55.554 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:10:55.554 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:55.554 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58695 00:10:55.812 killing process with pid 58695 00:10:55.812 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:55.812 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:55.812 09:03:32 event.cpu_locks.default_locks -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58695' 00:10:55.812 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58695 00:10:55.812 09:03:32 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58695 00:10:58.344 09:03:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58695 00:10:58.344 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:58.344 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58695 00:10:58.344 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:58.344 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:58.344 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58695 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58695 ']' 00:10:58.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 ERROR: process (pid: 58695) is no longer running 00:10:58.345 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58695) - No such process 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:58.345 00:10:58.345 real 0m3.929s 00:10:58.345 user 0m3.998s 00:10:58.345 sys 0m0.719s 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:58.345 09:03:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 ************************************ 00:10:58.345 END TEST default_locks 00:10:58.345 ************************************ 00:10:58.345 09:03:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:58.345 09:03:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:10:58.345 09:03:34 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:58.345 09:03:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 ************************************ 00:10:58.345 START TEST default_locks_via_rpc 00:10:58.345 ************************************ 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58770 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58770 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58770 ']' 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:58.345 09:03:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.345 [2024-11-06 09:03:34.472172] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:10:58.345 [2024-11-06 09:03:34.473146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58770 ] 00:10:58.345 [2024-11-06 09:03:34.669529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.345 [2024-11-06 09:03:34.831182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.281 09:03:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58770 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58770 00:10:59.281 09:03:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58770 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58770 ']' 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58770 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58770 00:10:59.849 killing process with pid 58770 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58770' 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58770 00:10:59.849 09:03:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58770 00:11:02.381 ************************************ 00:11:02.381 END TEST default_locks_via_rpc 00:11:02.381 ************************************ 00:11:02.381 00:11:02.381 real 0m4.075s 00:11:02.381 user 0m4.110s 00:11:02.381 sys 0m0.748s 00:11:02.381 
09:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:02.381 09:03:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.381 09:03:38 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:11:02.381 09:03:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:02.381 09:03:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:02.381 09:03:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:02.381 ************************************ 00:11:02.381 START TEST non_locking_app_on_locked_coremask 00:11:02.381 ************************************ 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:11:02.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58844 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58844 /var/tmp/spdk.sock 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58844 ']' 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:02.381 09:03:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:02.381 [2024-11-06 09:03:38.596518] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:11:02.381 [2024-11-06 09:03:38.597503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:11:02.381 [2024-11-06 09:03:38.779063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.381 [2024-11-06 09:03:38.909079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58862 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58862 /var/tmp/spdk2.sock 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58862 ']' 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:03.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:03.316 09:03:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:03.574 [2024-11-06 09:03:39.894907] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:03.574 [2024-11-06 09:03:39.895082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58862 ] 00:11:03.574 [2024-11-06 09:03:40.100267] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:03.574 [2024-11-06 09:03:40.100354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.139 [2024-11-06 09:03:40.376321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.671 09:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:06.671 09:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:06.671 09:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58844 00:11:06.671 09:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58844 00:11:06.671 09:03:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58844 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58844 ']' 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58844 00:11:07.238 09:03:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58844 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:07.238 killing process with pid 58844 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58844' 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58844 00:11:07.238 09:03:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58844 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58862 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58862 ']' 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58862 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58862 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:11.454 09:03:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58862' 00:11:11.454 killing process with pid 58862 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58862 00:11:11.454 09:03:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58862 00:11:14.008 00:11:14.008 real 0m11.608s 00:11:14.008 user 0m12.136s 00:11:14.008 sys 0m1.531s 00:11:14.008 09:03:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.008 09:03:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.008 ************************************ 00:11:14.008 END TEST non_locking_app_on_locked_coremask 00:11:14.008 ************************************ 00:11:14.008 09:03:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:11:14.008 09:03:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:14.008 09:03:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.008 09:03:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:14.008 ************************************ 00:11:14.008 START TEST locking_app_on_unlocked_coremask 00:11:14.008 ************************************ 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59014 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59014 
/var/tmp/spdk.sock 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59014 ']' 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:14.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:14.008 09:03:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:14.008 [2024-11-06 09:03:50.271099] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:14.008 [2024-11-06 09:03:50.271317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59014 ] 00:11:14.008 [2024-11-06 09:03:50.458599] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:14.008 [2024-11-06 09:03:50.458678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.267 [2024-11-06 09:03:50.590948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59031 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59031 /var/tmp/spdk2.sock 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59031 ']' 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:15.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:15.203 09:03:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:15.203 [2024-11-06 09:03:51.613391] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:11:15.203 [2024-11-06 09:03:51.613592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:11:15.462 [2024-11-06 09:03:51.813029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.720 [2024-11-06 09:03:52.084526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.284 09:03:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:18.284 09:03:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:18.285 09:03:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59031 00:11:18.285 09:03:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59031 00:11:18.285 09:03:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59014 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59014 ']' 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59014 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59014 00:11:18.543 killing process with pid 59014 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59014' 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59014 00:11:18.543 09:03:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59014 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59031 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59031 ']' 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59031 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59031 00:11:23.829 killing process with pid 59031 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59031' 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59031 00:11:23.829 09:03:59 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@976 -- # wait 59031 00:11:25.728 00:11:25.728 real 0m11.684s 00:11:25.728 user 0m12.248s 00:11:25.728 sys 0m1.443s 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:25.728 ************************************ 00:11:25.728 END TEST locking_app_on_unlocked_coremask 00:11:25.728 ************************************ 00:11:25.728 09:04:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:11:25.728 09:04:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:25.728 09:04:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:25.728 09:04:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:25.728 ************************************ 00:11:25.728 START TEST locking_app_on_locked_coremask 00:11:25.728 ************************************ 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59179 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59179 /var/tmp/spdk.sock 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59179 ']' 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:25.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:25.728 09:04:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:25.728 [2024-11-06 09:04:02.002481] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:25.728 [2024-11-06 09:04:02.002677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59179 ] 00:11:25.728 [2024-11-06 09:04:02.192649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.987 [2024-11-06 09:04:02.356167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59201 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59201 /var/tmp/spdk2.sock 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@650 -- # local es=0 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59201 /var/tmp/spdk2.sock 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59201 /var/tmp/spdk2.sock 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59201 ']' 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:26.945 09:04:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:26.945 [2024-11-06 09:04:03.368658] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:11:26.945 [2024-11-06 09:04:03.368853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:11:27.203 [2024-11-06 09:04:03.571854] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59179 has claimed it. 00:11:27.203 [2024-11-06 09:04:03.571969] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:27.770 ERROR: process (pid: 59201) is no longer running 00:11:27.770 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59201) - No such process 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59179 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59179 00:11:27.770 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:11:28.028 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59179 00:11:28.028 09:04:04 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59179 ']' 00:11:28.028 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59179 00:11:28.028 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:11:28.028 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:28.028 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59179 00:11:28.286 killing process with pid 59179 00:11:28.286 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:28.286 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:28.286 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59179' 00:11:28.286 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59179 00:11:28.286 09:04:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59179 00:11:30.812 00:11:30.812 real 0m5.100s 00:11:30.812 user 0m5.560s 00:11:30.812 sys 0m0.918s 00:11:30.812 09:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.812 09:04:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:30.812 ************************************ 00:11:30.812 END TEST locking_app_on_locked_coremask 00:11:30.812 ************************************ 00:11:30.812 09:04:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:11:30.812 09:04:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 
00:11:30.812 09:04:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.812 09:04:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:30.812 ************************************ 00:11:30.812 START TEST locking_overlapped_coremask 00:11:30.812 ************************************ 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59276 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59276 /var/tmp/spdk.sock 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59276 ']' 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:30.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:30.812 09:04:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:30.812 [2024-11-06 09:04:07.156466] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:11:30.812 [2024-11-06 09:04:07.156652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59276 ] 00:11:31.070 [2024-11-06 09:04:07.347581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:31.070 [2024-11-06 09:04:07.524463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.070 [2024-11-06 09:04:07.524594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.070 [2024-11-06 09:04:07.524604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59294 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59294 /var/tmp/spdk2.sock 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59294 /var/tmp/spdk2.sock 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@642 -- # type -t waitforlisten 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59294 /var/tmp/spdk2.sock 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59294 ']' 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:32.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:32.443 09:04:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:32.443 [2024-11-06 09:04:08.739505] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:32.443 [2024-11-06 09:04:08.739718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59294 ] 00:11:32.443 [2024-11-06 09:04:08.948662] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59276 has claimed it. 00:11:32.443 [2024-11-06 09:04:08.948780] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:11:33.010 ERROR: process (pid: 59294) is no longer running 00:11:33.010 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59294) - No such process 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59276 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59276 ']' 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59276 00:11:33.010 09:04:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59276 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:33.010 killing process with pid 59276 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59276' 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59276 00:11:33.010 09:04:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59276 00:11:35.545 00:11:35.545 real 0m4.961s 00:11:35.545 user 0m13.575s 00:11:35.545 sys 0m0.730s 00:11:35.545 09:04:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:35.545 ************************************ 00:11:35.545 END TEST locking_overlapped_coremask 00:11:35.545 ************************************ 00:11:35.545 09:04:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:35.545 09:04:12 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:35.545 09:04:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:35.545 09:04:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:35.545 09:04:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:35.545 ************************************ 00:11:35.545 START TEST 
locking_overlapped_coremask_via_rpc 00:11:35.545 ************************************ 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59363 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59363 /var/tmp/spdk.sock 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59363 ']' 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.545 09:04:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.803 [2024-11-06 09:04:12.149674] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:11:35.803 [2024-11-06 09:04:12.149876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:11:36.060 [2024-11-06 09:04:12.337807] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:11:36.060 [2024-11-06 09:04:12.337891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:36.060 [2024-11-06 09:04:12.520256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.060 [2024-11-06 09:04:12.520407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.060 [2024-11-06 09:04:12.520428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59387 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59387 /var/tmp/spdk2.sock 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59387 ']' 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:36.996 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:36.996 09:04:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.996 [2024-11-06 09:04:13.507270] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:36.996 [2024-11-06 09:04:13.507424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59387 ] 00:11:37.255 [2024-11-06 09:04:13.699538] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:37.255 [2024-11-06 09:04:13.699639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:37.513 [2024-11-06 09:04:13.976596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.513 [2024-11-06 09:04:13.979913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.513 [2024-11-06 09:04:13.979942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.050 09:04:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.050 [2024-11-06 09:04:16.271036] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59363 has claimed it. 00:11:40.050 request: 00:11:40.050 { 00:11:40.050 "method": "framework_enable_cpumask_locks", 00:11:40.050 "req_id": 1 00:11:40.050 } 00:11:40.050 Got JSON-RPC error response 00:11:40.050 response: 00:11:40.050 { 00:11:40.050 "code": -32603, 00:11:40.050 "message": "Failed to claim CPU core: 2" 00:11:40.050 } 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59363 /var/tmp/spdk.sock 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # 
'[' -z 59363 ']' 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:40.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59387 /var/tmp/spdk2.sock 00:11:40.050 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59387 ']' 00:11:40.051 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:40.051 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:40.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:40.051 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:11:40.051 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:40.051 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:40.309 00:11:40.309 real 0m4.820s 00:11:40.309 user 0m1.795s 00:11:40.309 sys 0m0.254s 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.309 ************************************ 00:11:40.309 END TEST locking_overlapped_coremask_via_rpc 00:11:40.309 ************************************ 00:11:40.309 09:04:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:40.567 09:04:16 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:40.567 09:04:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59363 ]] 00:11:40.567 09:04:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59363 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59363 ']' 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59363 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59363 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:40.567 killing process with pid 59363 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59363' 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59363 00:11:40.567 09:04:16 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59363 00:11:43.096 09:04:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59387 ]] 00:11:43.096 09:04:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59387 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59387 ']' 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59387 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59387 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:11:43.096 killing process with pid 59387 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 59387' 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59387 00:11:43.096 09:04:19 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59387 00:11:44.998 09:04:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:44.998 09:04:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:44.998 09:04:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59363 ]] 00:11:44.998 09:04:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59363 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59363 ']' 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59363 00:11:44.998 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59363) - No such process 00:11:44.998 Process with pid 59363 is not found 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59363 is not found' 00:11:44.998 09:04:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59387 ]] 00:11:44.998 09:04:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59387 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59387 ']' 00:11:44.998 Process with pid 59387 is not found 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59387 00:11:44.998 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59387) - No such process 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59387 is not found' 00:11:44.998 09:04:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:44.998 00:11:44.998 real 0m51.292s 00:11:44.998 user 1m29.344s 00:11:44.998 sys 0m7.573s 00:11:44.998 ************************************ 00:11:44.998 END TEST cpu_locks 00:11:44.998 ************************************ 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:11:44.998 09:04:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:44.998 00:11:44.998 real 1m23.378s 00:11:44.998 user 2m33.109s 00:11:44.998 sys 0m11.794s 00:11:44.998 09:04:21 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:44.998 09:04:21 event -- common/autotest_common.sh@10 -- # set +x 00:11:44.998 ************************************ 00:11:44.998 END TEST event 00:11:44.998 ************************************ 00:11:45.256 09:04:21 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:45.256 09:04:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:45.256 09:04:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:45.256 09:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:45.256 ************************************ 00:11:45.256 START TEST thread 00:11:45.256 ************************************ 00:11:45.256 09:04:21 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:45.256 * Looking for test storage... 
00:11:45.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:45.256 09:04:21 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:45.256 09:04:21 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:11:45.256 09:04:21 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:45.256 09:04:21 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:45.256 09:04:21 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.257 09:04:21 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.257 09:04:21 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.257 09:04:21 thread -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.257 09:04:21 thread -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.257 09:04:21 thread -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.257 09:04:21 thread -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.257 09:04:21 thread -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.257 09:04:21 thread -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.257 09:04:21 thread -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.257 09:04:21 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.257 09:04:21 thread -- scripts/common.sh@344 -- # case "$op" in 00:11:45.257 09:04:21 thread -- scripts/common.sh@345 -- # : 1 00:11:45.257 09:04:21 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.257 09:04:21 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.257 09:04:21 thread -- scripts/common.sh@365 -- # decimal 1 00:11:45.257 09:04:21 thread -- scripts/common.sh@353 -- # local d=1 00:11:45.257 09:04:21 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.257 09:04:21 thread -- scripts/common.sh@355 -- # echo 1 00:11:45.257 09:04:21 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.257 09:04:21 thread -- scripts/common.sh@366 -- # decimal 2 00:11:45.257 09:04:21 thread -- scripts/common.sh@353 -- # local d=2 00:11:45.257 09:04:21 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.257 09:04:21 thread -- scripts/common.sh@355 -- # echo 2 00:11:45.257 09:04:21 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.257 09:04:21 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.257 09:04:21 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.257 09:04:21 thread -- scripts/common.sh@368 -- # return 0 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:45.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.257 --rc genhtml_branch_coverage=1 00:11:45.257 --rc genhtml_function_coverage=1 00:11:45.257 --rc genhtml_legend=1 00:11:45.257 --rc geninfo_all_blocks=1 00:11:45.257 --rc geninfo_unexecuted_blocks=1 00:11:45.257 00:11:45.257 ' 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:45.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.257 --rc genhtml_branch_coverage=1 00:11:45.257 --rc genhtml_function_coverage=1 00:11:45.257 --rc genhtml_legend=1 00:11:45.257 --rc geninfo_all_blocks=1 00:11:45.257 --rc geninfo_unexecuted_blocks=1 00:11:45.257 00:11:45.257 ' 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:45.257 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.257 --rc genhtml_branch_coverage=1 00:11:45.257 --rc genhtml_function_coverage=1 00:11:45.257 --rc genhtml_legend=1 00:11:45.257 --rc geninfo_all_blocks=1 00:11:45.257 --rc geninfo_unexecuted_blocks=1 00:11:45.257 00:11:45.257 ' 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:45.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.257 --rc genhtml_branch_coverage=1 00:11:45.257 --rc genhtml_function_coverage=1 00:11:45.257 --rc genhtml_legend=1 00:11:45.257 --rc geninfo_all_blocks=1 00:11:45.257 --rc geninfo_unexecuted_blocks=1 00:11:45.257 00:11:45.257 ' 00:11:45.257 09:04:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:45.257 09:04:21 thread -- common/autotest_common.sh@10 -- # set +x 00:11:45.257 ************************************ 00:11:45.257 START TEST thread_poller_perf 00:11:45.257 ************************************ 00:11:45.257 09:04:21 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:45.516 [2024-11-06 09:04:21.815328] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:11:45.516 [2024-11-06 09:04:21.815508] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:11:45.516 [2024-11-06 09:04:22.005523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.774 [2024-11-06 09:04:22.162118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.774 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:47.175 [2024-11-06T09:04:23.710Z] ====================================== 00:11:47.175 [2024-11-06T09:04:23.710Z] busy:2217897824 (cyc) 00:11:47.175 [2024-11-06T09:04:23.710Z] total_run_count: 312000 00:11:47.175 [2024-11-06T09:04:23.710Z] tsc_hz: 2200000000 (cyc) 00:11:47.175 [2024-11-06T09:04:23.710Z] ====================================== 00:11:47.175 [2024-11-06T09:04:23.710Z] poller_cost: 7108 (cyc), 3230 (nsec) 00:11:47.175 00:11:47.175 real 0m1.639s 00:11:47.175 user 0m1.410s 00:11:47.175 sys 0m0.118s 00:11:47.175 09:04:23 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:47.175 ************************************ 00:11:47.175 END TEST thread_poller_perf 00:11:47.175 ************************************ 00:11:47.175 09:04:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 09:04:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:47.175 09:04:23 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:11:47.175 09:04:23 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:47.175 09:04:23 thread -- common/autotest_common.sh@10 -- # set +x 00:11:47.175 ************************************ 00:11:47.175 START TEST thread_poller_perf 00:11:47.175 
************************************ 00:11:47.175 09:04:23 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:47.175 [2024-11-06 09:04:23.502378] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:47.175 [2024-11-06 09:04:23.502546] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59619 ] 00:11:47.175 [2024-11-06 09:04:23.687536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.434 [2024-11-06 09:04:23.815072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.434 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:48.810 [2024-11-06T09:04:25.345Z] ====================================== 00:11:48.810 [2024-11-06T09:04:25.345Z] busy:2204397822 (cyc) 00:11:48.810 [2024-11-06T09:04:25.345Z] total_run_count: 3801000 00:11:48.810 [2024-11-06T09:04:25.345Z] tsc_hz: 2200000000 (cyc) 00:11:48.810 [2024-11-06T09:04:25.345Z] ====================================== 00:11:48.810 [2024-11-06T09:04:25.345Z] poller_cost: 579 (cyc), 263 (nsec) 00:11:48.810 00:11:48.810 real 0m1.600s 00:11:48.810 user 0m1.368s 00:11:48.810 sys 0m0.121s 00:11:48.811 09:04:25 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:48.811 09:04:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:48.811 ************************************ 00:11:48.811 END TEST thread_poller_perf 00:11:48.811 ************************************ 00:11:48.811 09:04:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:11:48.811 00:11:48.811 real 0m3.560s 00:11:48.811 user 0m2.944s 00:11:48.811 sys 0m0.389s 00:11:48.811 09:04:25 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:11:48.811 09:04:25 thread -- common/autotest_common.sh@10 -- # set +x 00:11:48.811 ************************************ 00:11:48.811 END TEST thread 00:11:48.811 ************************************ 00:11:48.811 09:04:25 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:11:48.811 09:04:25 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:48.811 09:04:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:48.811 09:04:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:48.811 09:04:25 -- common/autotest_common.sh@10 -- # set +x 00:11:48.811 ************************************ 00:11:48.811 START TEST app_cmdline 00:11:48.811 ************************************ 00:11:48.811 09:04:25 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:48.811 * Looking for test storage... 00:11:48.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:48.811 09:04:25 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:48.811 09:04:25 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:11:48.811 09:04:25 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:48.811 09:04:25 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@345 -- # : 1 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:11:48.811 09:04:25 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.070 09:04:25 app_cmdline -- scripts/common.sh@368 -- # return 0 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:49.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.070 --rc genhtml_branch_coverage=1 00:11:49.070 --rc genhtml_function_coverage=1 00:11:49.070 --rc 
genhtml_legend=1 00:11:49.070 --rc geninfo_all_blocks=1 00:11:49.070 --rc geninfo_unexecuted_blocks=1 00:11:49.070 00:11:49.070 ' 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:49.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.070 --rc genhtml_branch_coverage=1 00:11:49.070 --rc genhtml_function_coverage=1 00:11:49.070 --rc genhtml_legend=1 00:11:49.070 --rc geninfo_all_blocks=1 00:11:49.070 --rc geninfo_unexecuted_blocks=1 00:11:49.070 00:11:49.070 ' 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:49.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.070 --rc genhtml_branch_coverage=1 00:11:49.070 --rc genhtml_function_coverage=1 00:11:49.070 --rc genhtml_legend=1 00:11:49.070 --rc geninfo_all_blocks=1 00:11:49.070 --rc geninfo_unexecuted_blocks=1 00:11:49.070 00:11:49.070 ' 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:49.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.070 --rc genhtml_branch_coverage=1 00:11:49.070 --rc genhtml_function_coverage=1 00:11:49.070 --rc genhtml_legend=1 00:11:49.070 --rc geninfo_all_blocks=1 00:11:49.070 --rc geninfo_unexecuted_blocks=1 00:11:49.070 00:11:49.070 ' 00:11:49.070 09:04:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:49.070 09:04:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59702 00:11:49.070 09:04:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:49.070 09:04:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59702 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59702 ']' 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@838 -- # 
local max_retries=100 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.070 09:04:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:49.070 [2024-11-06 09:04:25.463480] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:49.070 [2024-11-06 09:04:25.463923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59702 ] 00:11:49.329 [2024-11-06 09:04:25.636217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.329 [2024-11-06 09:04:25.809505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.701 09:04:26 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:50.701 09:04:26 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:11:50.701 09:04:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:50.701 { 00:11:50.701 "version": "SPDK v25.01-pre git sha1 a59d7e018", 00:11:50.701 "fields": { 00:11:50.701 "major": 25, 00:11:50.701 "minor": 1, 00:11:50.701 "patch": 0, 00:11:50.701 "suffix": "-pre", 00:11:50.701 "commit": "a59d7e018" 00:11:50.701 } 00:11:50.701 } 00:11:50.701 09:04:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:50.701 09:04:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:50.701 09:04:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:50.701 09:04:27 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:50.701 09:04:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:50.701 09:04:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:50.701 09:04:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:50.701 09:04:27 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.701 09:04:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:50.701 09:04:27 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.958 09:04:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:50.958 09:04:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:50.958 09:04:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:50.958 09:04:27 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:51.243 request: 00:11:51.243 { 00:11:51.243 "method": "env_dpdk_get_mem_stats", 00:11:51.243 "req_id": 1 00:11:51.243 } 00:11:51.243 Got JSON-RPC error response 00:11:51.243 response: 00:11:51.243 { 00:11:51.243 "code": -32601, 00:11:51.243 "message": "Method not found" 00:11:51.243 } 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:51.243 09:04:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59702 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59702 ']' 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59702 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59702 00:11:51.243 killing process with pid 59702 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59702' 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@971 -- # kill 59702 00:11:51.243 09:04:27 app_cmdline -- common/autotest_common.sh@976 -- # wait 59702 00:11:53.819 ************************************ 00:11:53.819 END TEST app_cmdline 00:11:53.819 ************************************ 
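The app_cmdline test above starts `spdk_tgt` with `--rpcs-allowed spdk_get_version,rpc_get_methods` and expects `env_dpdk_get_mem_stats` to fail with the JSON-RPC error `-32601` ("Method not found"), exactly as the request/response pair in the log shows. A minimal Python model of that allow-listed dispatch (a sketch only — the handler set and response shape are taken from the log, not from the SPDK implementation):

```python
# Toy model of an allow-listed JSON-RPC dispatcher, mirroring the
# --rpcs-allowed behavior exercised by cmdline.sh. Not SPDK code.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request: dict) -> dict:
    """Return a JSON-RPC 2.0 response; filtered methods get -32601."""
    if request["method"] not in ALLOWED:
        return {
            "jsonrpc": "2.0",
            "id": request.get("id", 1),
            "error": {"code": -32601, "message": "Method not found"},
        }
    return {"jsonrpc": "2.0", "id": request.get("id", 1), "result": {}}

resp = dispatch({"method": "env_dpdk_get_mem_stats", "id": 1})
```

The `(( 2 == 2 ))` check in the trace is the test asserting that exactly the two allowed methods come back from `rpc_get_methods`.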
00:11:53.819 00:11:53.819 real 0m4.916s 00:11:53.819 user 0m5.523s 00:11:53.819 sys 0m0.705s 00:11:53.819 09:04:30 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:53.819 09:04:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:53.819 09:04:30 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:53.819 09:04:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:53.819 09:04:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:53.819 09:04:30 -- common/autotest_common.sh@10 -- # set +x 00:11:53.819 ************************************ 00:11:53.819 START TEST version 00:11:53.819 ************************************ 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:53.819 * Looking for test storage... 00:11:53.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1691 -- # lcov --version 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:53.819 09:04:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.819 09:04:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.819 09:04:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.819 09:04:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.819 09:04:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.819 09:04:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.819 09:04:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.819 09:04:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.819 09:04:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.819 09:04:30 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:11:53.819 09:04:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.819 09:04:30 version -- scripts/common.sh@344 -- # case "$op" in 00:11:53.819 09:04:30 version -- scripts/common.sh@345 -- # : 1 00:11:53.819 09:04:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.819 09:04:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.819 09:04:30 version -- scripts/common.sh@365 -- # decimal 1 00:11:53.819 09:04:30 version -- scripts/common.sh@353 -- # local d=1 00:11:53.819 09:04:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.819 09:04:30 version -- scripts/common.sh@355 -- # echo 1 00:11:53.819 09:04:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.819 09:04:30 version -- scripts/common.sh@366 -- # decimal 2 00:11:53.819 09:04:30 version -- scripts/common.sh@353 -- # local d=2 00:11:53.819 09:04:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.819 09:04:30 version -- scripts/common.sh@355 -- # echo 2 00:11:53.819 09:04:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.819 09:04:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.819 09:04:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.819 09:04:30 version -- scripts/common.sh@368 -- # return 0 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:53.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.819 --rc genhtml_branch_coverage=1 00:11:53.819 --rc genhtml_function_coverage=1 00:11:53.819 --rc genhtml_legend=1 00:11:53.819 --rc geninfo_all_blocks=1 00:11:53.819 --rc geninfo_unexecuted_blocks=1 00:11:53.819 00:11:53.819 ' 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1704 -- # 
LCOV_OPTS=' 00:11:53.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.819 --rc genhtml_branch_coverage=1 00:11:53.819 --rc genhtml_function_coverage=1 00:11:53.819 --rc genhtml_legend=1 00:11:53.819 --rc geninfo_all_blocks=1 00:11:53.819 --rc geninfo_unexecuted_blocks=1 00:11:53.819 00:11:53.819 ' 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:53.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.819 --rc genhtml_branch_coverage=1 00:11:53.819 --rc genhtml_function_coverage=1 00:11:53.819 --rc genhtml_legend=1 00:11:53.819 --rc geninfo_all_blocks=1 00:11:53.819 --rc geninfo_unexecuted_blocks=1 00:11:53.819 00:11:53.819 ' 00:11:53.819 09:04:30 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:53.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.819 --rc genhtml_branch_coverage=1 00:11:53.819 --rc genhtml_function_coverage=1 00:11:53.819 --rc genhtml_legend=1 00:11:53.819 --rc geninfo_all_blocks=1 00:11:53.819 --rc geninfo_unexecuted_blocks=1 00:11:53.819 00:11:53.819 ' 00:11:53.819 09:04:30 version -- app/version.sh@17 -- # get_header_version major 00:11:53.819 09:04:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # cut -f2 00:11:53.819 09:04:30 version -- app/version.sh@17 -- # major=25 00:11:53.819 09:04:30 version -- app/version.sh@18 -- # get_header_version minor 00:11:53.819 09:04:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # cut -f2 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:53.819 09:04:30 version -- app/version.sh@18 -- # minor=1 00:11:53.819 09:04:30 
version -- app/version.sh@19 -- # get_header_version patch 00:11:53.819 09:04:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # cut -f2 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:53.819 09:04:30 version -- app/version.sh@19 -- # patch=0 00:11:53.819 09:04:30 version -- app/version.sh@20 -- # get_header_version suffix 00:11:53.819 09:04:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # cut -f2 00:11:53.819 09:04:30 version -- app/version.sh@14 -- # tr -d '"' 00:11:53.819 09:04:30 version -- app/version.sh@20 -- # suffix=-pre 00:11:53.820 09:04:30 version -- app/version.sh@22 -- # version=25.1 00:11:53.820 09:04:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:53.820 09:04:30 version -- app/version.sh@28 -- # version=25.1rc0 00:11:53.820 09:04:30 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:53.820 09:04:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:54.078 09:04:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:11:54.078 09:04:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:11:54.078 00:11:54.078 real 0m0.257s 00:11:54.078 user 0m0.165s 00:11:54.078 sys 0m0.131s 00:11:54.078 09:04:30 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:54.078 ************************************ 00:11:54.078 END TEST version 00:11:54.078 ************************************ 00:11:54.079 09:04:30 version -- common/autotest_common.sh@10 -- # set +x 00:11:54.079 
09:04:30 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:11:54.079 09:04:30 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:11:54.079 09:04:30 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:54.079 09:04:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:54.079 09:04:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.079 09:04:30 -- common/autotest_common.sh@10 -- # set +x 00:11:54.079 ************************************ 00:11:54.079 START TEST bdev_raid 00:11:54.079 ************************************ 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:11:54.079 * Looking for test storage... 00:11:54.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@345 -- # : 1 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.079 09:04:30 bdev_raid -- scripts/common.sh@368 -- # return 0 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:54.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.079 --rc genhtml_branch_coverage=1 00:11:54.079 --rc genhtml_function_coverage=1 00:11:54.079 --rc genhtml_legend=1 00:11:54.079 --rc geninfo_all_blocks=1 00:11:54.079 --rc geninfo_unexecuted_blocks=1 00:11:54.079 00:11:54.079 ' 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:54.079 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:54.079 --rc genhtml_branch_coverage=1 00:11:54.079 --rc genhtml_function_coverage=1 00:11:54.079 --rc genhtml_legend=1 00:11:54.079 --rc geninfo_all_blocks=1 00:11:54.079 --rc geninfo_unexecuted_blocks=1 00:11:54.079 00:11:54.079 ' 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:54.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.079 --rc genhtml_branch_coverage=1 00:11:54.079 --rc genhtml_function_coverage=1 00:11:54.079 --rc genhtml_legend=1 00:11:54.079 --rc geninfo_all_blocks=1 00:11:54.079 --rc geninfo_unexecuted_blocks=1 00:11:54.079 00:11:54.079 ' 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:54.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.079 --rc genhtml_branch_coverage=1 00:11:54.079 --rc genhtml_function_coverage=1 00:11:54.079 --rc genhtml_legend=1 00:11:54.079 --rc geninfo_all_blocks=1 00:11:54.079 --rc geninfo_unexecuted_blocks=1 00:11:54.079 00:11:54.079 ' 00:11:54.079 09:04:30 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:54.079 09:04:30 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:11:54.079 09:04:30 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:11:54.079 09:04:30 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:11:54.079 09:04:30 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:11:54.079 09:04:30 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:11:54.079 09:04:30 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:54.079 09:04:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.079 ************************************ 
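The `lt 1.15 2` trace repeated in each test prologue is `scripts/common.sh` comparing dotted version strings component-wise: both versions are split on `.`/`-`/`:` (the `IFS=.-:` lines), then numeric components are compared left to right. A Python re-derivation of that comparison (a sketch; treating missing trailing components as zero is an assumption, not something the trace confirms):

```python
import re

def cmp_lt(v1: str, v2: str) -> bool:
    """Component-wise 'less than' over dotted versions, split on .-: as
    in the traced shell code; shorter versions are zero-padded (assumed)."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b
```

With this rule, `lcov` 1.15 is less than 2 because the first components already decide it (1 < 2), which is why the trace returns 0 (true) and the coverage options get the legacy `--rc` flags.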
00:11:54.079 START TEST raid1_resize_data_offset_test 00:11:54.079 ************************************ 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59901 00:11:54.079 Process raid pid: 59901 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59901' 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59901 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 59901 ']' 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:54.079 09:04:30 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.338 [2024-11-06 09:04:30.715911] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:11:54.338 [2024-11-06 09:04:30.716105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.597 [2024-11-06 09:04:30.909880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.597 [2024-11-06 09:04:31.074484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.857 [2024-11-06 09:04:31.289034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.857 [2024-11-06 09:04:31.289096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.443 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:55.443 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0 00:11:55.443 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:11:55.443 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.443 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.443 malloc0 00:11:55.443 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.443 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 malloc1 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 09:04:31 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 null0 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 [2024-11-06 09:04:31.855094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:11:55.444 [2024-11-06 09:04:31.857477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:55.444 [2024-11-06 09:04:31.857566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:11:55.444 [2024-11-06 09:04:31.857784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:55.444 [2024-11-06 09:04:31.857807] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:11:55.444 [2024-11-06 09:04:31.858162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:11:55.444 [2024-11-06 09:04:31.858392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:55.444 [2024-11-06 09:04:31.858423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:11:55.444 [2024-11-06 09:04:31.858599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 [2024-11-06 09:04:31.915168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 09:04:31 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.010 malloc2 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.010 [2024-11-06 09:04:32.459714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:56.010 [2024-11-06 09:04:32.476911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.010 [2024-11-06 09:04:32.479418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59901 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 59901 ']' 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 59901 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux 
']' 00:11:56.010 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59901 00:11:56.269 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:56.269 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:56.269 killing process with pid 59901 00:11:56.269 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59901' 00:11:56.269 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 59901 00:11:56.269 09:04:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 59901 00:11:56.269 [2024-11-06 09:04:32.564938] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.269 [2024-11-06 09:04:32.566228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:11:56.269 [2024-11-06 09:04:32.566303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.269 [2024-11-06 09:04:32.566330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:11:56.269 [2024-11-06 09:04:32.597333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.269 [2024-11-06 09:04:32.597750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.269 [2024-11-06 09:04:32.597784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:11:58.173 [2024-11-06 09:04:34.230299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:58.739 09:04:35 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:11:58.739 00:11:58.739 real 0m4.651s 00:11:58.739 user 0m4.564s 00:11:58.739 sys 0m0.651s 00:11:58.739 09:04:35 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:58.740 ************************************ 00:11:58.740 END TEST raid1_resize_data_offset_test 00:11:58.740 ************************************ 00:11:58.740 09:04:35 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.999 09:04:35 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:11:58.999 09:04:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:58.999 09:04:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:58.999 09:04:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:58.999 ************************************ 00:11:58.999 START TEST raid0_resize_superblock_test 00:11:58.999 ************************************ 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59984 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59984' 00:11:58.999 Process raid pid: 59984 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59984 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 59984 ']' 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:11:58.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:58.999 09:04:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.999 [2024-11-06 09:04:35.421787] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:11:58.999 [2024-11-06 09:04:35.422030] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.258 [2024-11-06 09:04:35.610586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.258 [2024-11-06 09:04:35.735438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.516 [2024-11-06 09:04:35.941477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:59.516 [2024-11-06 09:04:35.941556] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.083 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:00.083 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:00.083 09:04:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:12:00.083 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.083 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:00.650 malloc0 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.650 [2024-11-06 09:04:36.950499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:00.650 [2024-11-06 09:04:36.950595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:00.650 [2024-11-06 09:04:36.950632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:00.650 [2024-11-06 09:04:36.950651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:00.650 [2024-11-06 09:04:36.953639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:00.650 [2024-11-06 09:04:36.953691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:00.650 pt0 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.650 09:04:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.650 c0c07df0-c7e3-49c3-a2a2-ef7359da06ab 00:12:00.650 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.650 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.651 2409e629-6c8a-4922-9e0a-fdfe0ae3270c 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.651 955d8b44-7493-44c2-8649-571bdaa9871e 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.651 [2024-11-06 09:04:37.094657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2409e629-6c8a-4922-9e0a-fdfe0ae3270c is claimed 00:12:00.651 [2024-11-06 09:04:37.094836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 955d8b44-7493-44c2-8649-571bdaa9871e is claimed 00:12:00.651 [2024-11-06 09:04:37.095050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:00.651 [2024-11-06 09:04:37.095076] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:12:00.651 [2024-11-06 09:04:37.095434] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:00.651 [2024-11-06 09:04:37.095694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:00.651 [2024-11-06 09:04:37.095712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:00.651 [2024-11-06 09:04:37.095965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.651 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:00.909 09:04:37 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:12:00.909 [2024-11-06 09:04:37.206940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.909 [2024-11-06 09:04:37.258957] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:00.909 [2024-11-06 09:04:37.258999] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2409e629-6c8a-4922-9e0a-fdfe0ae3270c' was resized: old size 131072, new size 204800 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.909 [2024-11-06 09:04:37.266786] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:00.909 [2024-11-06 09:04:37.266844] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '955d8b44-7493-44c2-8649-571bdaa9871e' was resized: old size 131072, new size 204800 00:12:00.909 [2024-11-06 09:04:37.266896] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.909 09:04:37 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:12:00.909 [2024-11-06 09:04:37.386995] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.909 [2024-11-06 09:04:37.434706] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:12:00.909 [2024-11-06 09:04:37.434806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:12:00.909 [2024-11-06 09:04:37.434842] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:00.909 [2024-11-06 09:04:37.434869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:00.909 [2024-11-06 09:04:37.435029] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.909 [2024-11-06 09:04:37.435082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:00.909 [2024-11-06 09:04:37.435102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.909 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.168 [2024-11-06 09:04:37.442579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:01.168 [2024-11-06 09:04:37.442650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.168 [2024-11-06 09:04:37.442683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:01.168 [2024-11-06 09:04:37.442707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.168 [2024-11-06 09:04:37.445611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.168 [2024-11-06 09:04:37.445664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:12:01.168 pt0 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.168 [2024-11-06 09:04:37.447997] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2409e629-6c8a-4922-9e0a-fdfe0ae3270c 00:12:01.168 [2024-11-06 09:04:37.448070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2409e629-6c8a-4922-9e0a-fdfe0ae3270c is claimed 00:12:01.168 [2024-11-06 09:04:37.448212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 955d8b44-7493-44c2-8649-571bdaa9871e 00:12:01.168 [2024-11-06 09:04:37.448249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 955d8b44-7493-44c2-8649-571bdaa9871e is claimed 00:12:01.168 [2024-11-06 09:04:37.448410] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 955d8b44-7493-44c2-8649-571bdaa9871e (2) smaller than existing raid bdev Raid (3) 00:12:01.168 [2024-11-06 09:04:37.448447] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2409e629-6c8a-4922-9e0a-fdfe0ae3270c: File exists 00:12:01.168 [2024-11-06 09:04:37.448504] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:01.168 [2024-11-06 09:04:37.448524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:12:01.168 [2024-11-06 09:04:37.448859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:01.168 [2024-11-06 09:04:37.449058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:01.168 [2024-11-06 
09:04:37.449073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:01.168 [2024-11-06 09:04:37.449257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.168 [2024-11-06 09:04:37.463041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59984 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 59984 ']' 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 59984 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59984 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:01.168 killing process with pid 59984 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59984' 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 59984 00:12:01.168 [2024-11-06 09:04:37.545124] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:01.168 09:04:37 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 59984 00:12:01.168 [2024-11-06 09:04:37.545233] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:01.168 [2024-11-06 09:04:37.545301] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:01.168 [2024-11-06 09:04:37.545317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:12:02.546 [2024-11-06 09:04:38.850605] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.482 09:04:39 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:12:03.482 00:12:03.482 real 0m4.563s 00:12:03.482 user 0m4.877s 00:12:03.482 sys 0m0.642s 00:12:03.482 ************************************ 00:12:03.482 END TEST raid0_resize_superblock_test 00:12:03.482 ************************************ 00:12:03.482 09:04:39 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.482 09:04:39 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.483 09:04:39 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:12:03.483 09:04:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:03.483 09:04:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:03.483 09:04:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:03.483 ************************************ 00:12:03.483 START TEST raid1_resize_superblock_test 00:12:03.483 ************************************ 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60083 00:12:03.483 Process raid pid: 60083 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60083' 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60083 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 60083 ']' 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:03.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:03.483 09:04:39 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.740 [2024-11-06 09:04:40.032423] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:03.740 [2024-11-06 09:04:40.032609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.740 [2024-11-06 09:04:40.219513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.021 [2024-11-06 09:04:40.347755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.021 [2024-11-06 09:04:40.553924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.021 [2024-11-06 09:04:40.553968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.587 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:04.587 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:04.587 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:12:04.587 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.587 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.154 malloc0 00:12:05.154 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.154 09:04:41 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:05.154 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.154 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.154 [2024-11-06 09:04:41.579052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:05.154 [2024-11-06 09:04:41.579133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.154 [2024-11-06 09:04:41.579180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:05.154 [2024-11-06 09:04:41.579220] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.154 [2024-11-06 09:04:41.582231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.154 [2024-11-06 09:04:41.582303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:05.154 pt0 00:12:05.154 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.154 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:12:05.154 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.154 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 21e0a7f4-7296-419f-bed6-c5bb42b533fb 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.412 09:04:41 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 31e06b08-cbbb-4cd3-84b9-b9c329b8c4db 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 6c9b68b9-682f-40a1-86a5-26ede8d66057 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 [2024-11-06 09:04:41.724737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 31e06b08-cbbb-4cd3-84b9-b9c329b8c4db is claimed 00:12:05.412 [2024-11-06 09:04:41.724886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6c9b68b9-682f-40a1-86a5-26ede8d66057 is claimed 00:12:05.412 [2024-11-06 09:04:41.725093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:05.412 [2024-11-06 09:04:41.725121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:12:05.412 [2024-11-06 09:04:41.725466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:05.412 [2024-11-06 09:04:41.725730] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:05.412 [2024-11-06 09:04:41.725749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:05.412 [2024-11-06 09:04:41.725971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.412 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:12:05.413 [2024-11-06 09:04:41.837117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.413 [2024-11-06 09:04:41.881065] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:05.413 [2024-11-06 09:04:41.881108] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '31e06b08-cbbb-4cd3-84b9-b9c329b8c4db' was resized: old size 131072, new size 204800 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:12:05.413 09:04:41 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.413 [2024-11-06 09:04:41.888953] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:05.413 [2024-11-06 09:04:41.888989] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6c9b68b9-682f-40a1-86a5-26ede8d66057' was resized: old size 131072, new size 204800 00:12:05.413 [2024-11-06 09:04:41.889028] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.413 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.672 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:12:05.672 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:12:05.672 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.672 09:04:41 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:12:05.672 09:04:41 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.672 09:04:41 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.672 [2024-11-06 09:04:42.017138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.672 [2024-11-06 09:04:42.060872] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:12:05.672 [2024-11-06 09:04:42.060975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:12:05.672 [2024-11-06 09:04:42.061013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:12:05.672 [2024-11-06 09:04:42.061243] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.672 [2024-11-06 09:04:42.061509] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.672 [2024-11-06 09:04:42.061621] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.672 [2024-11-06 09:04:42.061645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.672 [2024-11-06 09:04:42.068733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:12:05.672 [2024-11-06 09:04:42.068804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.672 [2024-11-06 09:04:42.068849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:12:05.672 [2024-11-06 09:04:42.068871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.672 [2024-11-06 09:04:42.071724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.672 [2024-11-06 09:04:42.071777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:12:05.672 pt0 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.672 
09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.672 [2024-11-06 09:04:42.074102] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 31e06b08-cbbb-4cd3-84b9-b9c329b8c4db 00:12:05.672 [2024-11-06 09:04:42.074184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 31e06b08-cbbb-4cd3-84b9-b9c329b8c4db is claimed 00:12:05.672 [2024-11-06 09:04:42.074329] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6c9b68b9-682f-40a1-86a5-26ede8d66057 00:12:05.672 [2024-11-06 09:04:42.074367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6c9b68b9-682f-40a1-86a5-26ede8d66057 is claimed 00:12:05.672 [2024-11-06 09:04:42.074519] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 6c9b68b9-682f-40a1-86a5-26ede8d66057 (2) smaller than existing raid bdev Raid (3) 00:12:05.672 [2024-11-06 09:04:42.074550] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 31e06b08-cbbb-4cd3-84b9-b9c329b8c4db: File exists 00:12:05.672 [2024-11-06 09:04:42.074608] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:05.672 [2024-11-06 09:04:42.074628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:05.672 [2024-11-06 09:04:42.074964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:05.672 [2024-11-06 09:04:42.075170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:05.672 [2024-11-06 09:04:42.075185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:12:05.672 
[2024-11-06 09:04:42.075369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.672 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:12:05.673 [2024-11-06 09:04:42.089133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60083 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 60083 ']' 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 60083 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60083 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:05.673 killing process with pid 60083 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60083' 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 60083 00:12:05.673 [2024-11-06 09:04:42.185677] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.673 [2024-11-06 09:04:42.185786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.673 09:04:42 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 60083 00:12:05.673 [2024-11-06 09:04:42.185890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.673 [2024-11-06 09:04:42.185909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:12:07.045 [2024-11-06 09:04:43.465667] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.977 09:04:44 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:12:07.977 00:12:07.977 real 0m4.574s 00:12:07.977 user 0m4.959s 00:12:07.977 sys 0m0.602s 00:12:07.977 ************************************ 00:12:07.977 END TEST raid1_resize_superblock_test 00:12:07.977 ************************************ 00:12:07.977 09:04:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:07.977 09:04:44 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.235 
09:04:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:12:08.235 09:04:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:12:08.235 09:04:44 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:12:08.235 09:04:44 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:12:08.235 09:04:44 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:12:08.235 09:04:44 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:12:08.235 09:04:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:08.235 09:04:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:08.235 09:04:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.235 ************************************ 00:12:08.235 START TEST raid_function_test_raid0 00:12:08.235 ************************************ 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60180 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:08.235 Process raid pid: 60180 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60180' 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60180 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 60180 ']' 00:12:08.235 09:04:44 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:08.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:08.235 09:04:44 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:08.235 [2024-11-06 09:04:44.678628] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:08.235 [2024-11-06 09:04:44.678845] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.493 [2024-11-06 09:04:44.860503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.493 [2024-11-06 09:04:44.987203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.752 [2024-11-06 09:04:45.183656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.752 [2024-11-06 09:04:45.183719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:09.318 Base_1 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:09.318 Base_2 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:09.318 [2024-11-06 09:04:45.733992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:09.318 [2024-11-06 09:04:45.736477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:09.318 [2024-11-06 09:04:45.736579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:09.318 [2024-11-06 09:04:45.736600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:09.318 [2024-11-06 09:04:45.736960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:09.318 [2024-11-06 09:04:45.737151] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:09.318 [2024-11-06 09:04:45.737173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:12:09.318 [2024-11-06 09:04:45.737362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:09.318 09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:09.318 
09:04:45 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:12:09.577 [2024-11-06 09:04:46.038390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:09.577 /dev/nbd0 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.577 1+0 records in 00:12:09.577 1+0 records out 00:12:09.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247564 s, 16.5 MB/s 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:12:09.577 
09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # return 0 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:09.577 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:09.835 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:09.835 { 00:12:09.835 "nbd_device": "/dev/nbd0", 00:12:09.835 "bdev_name": "raid" 00:12:09.835 } 00:12:09.835 ]' 00:12:09.835 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:09.835 { 00:12:09.835 "nbd_device": "/dev/nbd0", 00:12:09.835 "bdev_name": "raid" 00:12:09.835 } 00:12:09.835 ]' 00:12:09.835 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:10.093 09:04:46 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:10.093 4096+0 records in 00:12:10.093 4096+0 records out 00:12:10.093 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0281786 s, 74.4 MB/s 00:12:10.093 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:10.352 4096+0 records in 00:12:10.352 4096+0 records out 00:12:10.352 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.316857 s, 6.6 MB/s 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:10.352 128+0 records in 00:12:10.352 128+0 records out 00:12:10.352 65536 bytes (66 kB, 64 KiB) copied, 0.000933864 s, 70.2 MB/s 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:10.352 
09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:10.352 2035+0 records in 00:12:10.352 2035+0 records out 00:12:10.352 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0104513 s, 99.7 MB/s 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:10.352 456+0 records in 00:12:10.352 456+0 records out 00:12:10.352 233472 bytes (233 kB, 228 KiB) copied, 0.00232535 s, 100 MB/s 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:10.352 09:04:46 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:10.918 [2024-11-06 09:04:47.175950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:10.918 09:04:47 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:10.918 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60180 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 60180 ']' 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 60180 
00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60180 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:11.176 killing process with pid 60180 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60180' 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 60180 00:12:11.176 [2024-11-06 09:04:47.539945] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.176 09:04:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 60180 00:12:11.176 [2024-11-06 09:04:47.540074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.176 [2024-11-06 09:04:47.540140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.176 [2024-11-06 09:04:47.540163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:11.434 [2024-11-06 09:04:47.710413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:12.389 09:04:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:12:12.389 00:12:12.389 real 0m4.133s 00:12:12.389 user 0m5.043s 00:12:12.389 sys 0m0.990s 00:12:12.389 09:04:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:12.389 09:04:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:12:12.389 ************************************ 00:12:12.389 END TEST raid_function_test_raid0 00:12:12.389 ************************************ 00:12:12.389 09:04:48 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:12:12.389 09:04:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:12.389 09:04:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:12.389 09:04:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:12.389 ************************************ 00:12:12.389 START TEST raid_function_test_concat 00:12:12.389 ************************************ 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60309 00:12:12.389 Process raid pid: 60309 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60309' 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60309 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 60309 ']' 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 
00:12:12.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:12.389 09:04:48 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:12.389 [2024-11-06 09:04:48.849982] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:12.389 [2024-11-06 09:04:48.850158] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.647 [2024-11-06 09:04:49.028746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.648 [2024-11-06 09:04:49.162906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.906 [2024-11-06 09:04:49.368884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.906 [2024-11-06 09:04:49.368995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:13.474 Base_1 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:13.474 Base_2 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:13.474 [2024-11-06 09:04:49.980276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:13.474 [2024-11-06 09:04:49.983109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:13.474 [2024-11-06 09:04:49.983221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:13.474 [2024-11-06 09:04:49.983241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:13.474 [2024-11-06 09:04:49.983603] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:13.474 [2024-11-06 09:04:49.983874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:13.474 [2024-11-06 09:04:49.983906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:12:13.474 [2024-11-06 09:04:49.984213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:12:13.474 09:04:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.732 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:12:13.991 
[2024-11-06 09:04:50.352498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:13.991 /dev/nbd0 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:13.991 1+0 records in 00:12:13.991 1+0 records out 00:12:13.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029431 s, 13.9 MB/s 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:13.991 09:04:50 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:13.991 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:14.249 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:14.249 { 00:12:14.249 "nbd_device": "/dev/nbd0", 00:12:14.249 "bdev_name": "raid" 00:12:14.249 } 00:12:14.249 ]' 00:12:14.249 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:14.249 { 00:12:14.249 "nbd_device": "/dev/nbd0", 00:12:14.249 "bdev_name": "raid" 00:12:14.249 } 00:12:14.249 ]' 00:12:14.249 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:12:14.508 09:04:50 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:12:14.508 4096+0 records in 
00:12:14.508 4096+0 records out 00:12:14.508 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0335137 s, 62.6 MB/s 00:12:14.508 09:04:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:12:14.767 4096+0 records in 00:12:14.767 4096+0 records out 00:12:14.767 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.372268 s, 5.6 MB/s 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:12:14.767 128+0 records in 00:12:14.767 128+0 records out 00:12:14.767 65536 bytes (66 kB, 64 KiB) copied, 0.000935023 s, 70.1 MB/s 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:12:14.767 2035+0 records in 00:12:14.767 2035+0 records out 00:12:14.767 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0101593 s, 103 MB/s 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:12:14.767 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:12:15.025 456+0 records in 00:12:15.025 456+0 records out 00:12:15.025 233472 bytes (233 kB, 228 KiB) copied, 0.00301322 s, 77.5 MB/s 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.025 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:15.283 [2024-11-06 09:04:51.631212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@45 -- # return 0 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:12:15.283 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:12:15.541 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:15.541 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:15.541 09:04:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:15.541 09:04:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:15.541 09:04:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:15.541 09:04:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:15.541 09:04:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:12:15.541 09:04:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60309 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 60309 ']' 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 60309 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 
00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60309 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:15.542 killing process with pid 60309 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60309' 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 60309 00:12:15.542 [2024-11-06 09:04:52.055998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:15.542 09:04:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 60309 00:12:15.542 [2024-11-06 09:04:52.056116] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.542 [2024-11-06 09:04:52.056197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.542 [2024-11-06 09:04:52.056217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:12:15.800 [2024-11-06 09:04:52.240796] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:17.175 09:04:53 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:12:17.175 00:12:17.175 real 0m4.507s 00:12:17.175 user 0m5.607s 00:12:17.175 sys 0m1.036s 00:12:17.175 09:04:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:17.175 ************************************ 00:12:17.175 09:04:53 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:12:17.176 END TEST 
raid_function_test_concat 00:12:17.176 ************************************ 00:12:17.176 09:04:53 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:12:17.176 09:04:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:17.176 09:04:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:17.176 09:04:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:17.176 ************************************ 00:12:17.176 START TEST raid0_resize_test 00:12:17.176 ************************************ 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60449 00:12:17.176 Process raid pid: 60449 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60449' 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60449 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60449 ']' 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:17.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:17.176 09:04:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.176 [2024-11-06 09:04:53.438396] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:17.176 [2024-11-06 09:04:53.438576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.176 [2024-11-06 09:04:53.628520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.434 [2024-11-06 09:04:53.769941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.692 [2024-11-06 09:04:53.990740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.692 [2024-11-06 09:04:53.990782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.259 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:18.259 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:12:18.259 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:18.259 
09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 Base_1 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 Base_2 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 [2024-11-06 09:04:54.543346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:18.260 [2024-11-06 09:04:54.546010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:18.260 [2024-11-06 09:04:54.546300] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:18.260 [2024-11-06 09:04:54.546327] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:18.260 [2024-11-06 09:04:54.546679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:18.260 [2024-11-06 09:04:54.546882] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:18.260 [2024-11-06 09:04:54.546897] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:18.260 [2024-11-06 09:04:54.547072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 [2024-11-06 09:04:54.551328] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:18.260 [2024-11-06 09:04:54.551359] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:18.260 true 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 [2024-11-06 09:04:54.563584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:12:18.260 09:04:54 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 [2024-11-06 09:04:54.619435] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:18.260 [2024-11-06 09:04:54.619463] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:18.260 [2024-11-06 09:04:54.619522] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:12:18.260 true 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.260 [2024-11-06 09:04:54.631550] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:12:18.260 09:04:54 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60449 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60449 ']' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 60449 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60449 00:12:18.260 killing process with pid 60449 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60449' 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@971 -- # kill 60449 00:12:18.260 [2024-11-06 09:04:54.705709] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.260 09:04:54 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 60449 00:12:18.260 [2024-11-06 09:04:54.705812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.260 [2024-11-06 09:04:54.705894] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.260 [2024-11-06 09:04:54.705911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:18.260 [2024-11-06 09:04:54.720755] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:12:19.637 ************************************ 00:12:19.637 END TEST raid0_resize_test 00:12:19.637 ************************************ 00:12:19.637 09:04:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:19.637 00:12:19.637 real 0m2.412s 00:12:19.637 user 0m2.738s 00:12:19.637 sys 0m0.391s 00:12:19.637 09:04:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:19.637 09:04:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.637 09:04:55 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:12:19.637 09:04:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:19.637 09:04:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:19.637 09:04:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.637 ************************************ 00:12:19.637 START TEST raid1_resize_test 00:12:19.637 ************************************ 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:12:19.637 09:04:55 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60505 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60505' 00:12:19.637 Process raid pid: 60505 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60505 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 60505 ']' 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:19.637 09:04:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.637 [2024-11-06 09:04:55.902244] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:12:19.637 [2024-11-06 09:04:55.902682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.637 [2024-11-06 09:04:56.088384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.896 [2024-11-06 09:04:56.220919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.154 [2024-11-06 09:04:56.430173] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.154 [2024-11-06 09:04:56.430402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.413 Base_1 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.413 Base_2 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.413 [2024-11-06 09:04:56.928799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:12:20.413 [2024-11-06 09:04:56.931302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:12:20.413 [2024-11-06 09:04:56.931390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:20.413 [2024-11-06 09:04:56.931408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:20.413 [2024-11-06 09:04:56.931791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:12:20.413 [2024-11-06 09:04:56.931955] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:20.413 [2024-11-06 09:04:56.931972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:12:20.413 [2024-11-06 09:04:56.932156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.413 [2024-11-06 09:04:56.936760] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:20.413 [2024-11-06 09:04:56.936796] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:12:20.413 true 00:12:20.413 
09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:12:20.413 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.672 [2024-11-06 09:04:56.949000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.672 09:04:56 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.672 [2024-11-06 09:04:57.000741] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:12:20.672 [2024-11-06 09:04:57.000767] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:12:20.672 [2024-11-06 09:04:57.000818] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:12:20.672 true 00:12:20.672 09:04:57 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.672 [2024-11-06 09:04:57.013023] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60505 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 60505 ']' 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 60505 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60505 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:20.672 killing process with pid 60505 00:12:20.672 09:04:57 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60505' 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 60505 00:12:20.672 [2024-11-06 09:04:57.092581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:20.672 09:04:57 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 60505 00:12:20.672 [2024-11-06 09:04:57.092676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.672 [2024-11-06 09:04:57.093341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.672 [2024-11-06 09:04:57.093377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:12:20.672 [2024-11-06 09:04:57.109372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.639 09:04:58 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:12:21.639 00:12:21.639 real 0m2.320s 00:12:21.639 user 0m2.584s 00:12:21.639 sys 0m0.399s 00:12:21.639 09:04:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:21.639 ************************************ 00:12:21.639 END TEST raid1_resize_test 00:12:21.639 ************************************ 00:12:21.639 09:04:58 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.639 09:04:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:12:21.639 09:04:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:21.639 09:04:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:12:21.639 09:04:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:21.639 09:04:58 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:12:21.639 09:04:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.639 ************************************ 00:12:21.639 START TEST raid_state_function_test 00:12:21.639 ************************************ 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.639 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60568 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60568' 00:12:21.898 Process raid pid: 60568 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60568 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 60568 ']' 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:21.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:21.898 09:04:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.898 [2024-11-06 09:04:58.284312] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:21.898 [2024-11-06 09:04:58.284708] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.157 [2024-11-06 09:04:58.465941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.157 [2024-11-06 09:04:58.591575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.416 [2024-11-06 09:04:58.837670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.416 [2024-11-06 09:04:58.837742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.983 [2024-11-06 09:04:59.243391] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.983 [2024-11-06 09:04:59.243488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:12:22.983 [2024-11-06 09:04:59.243505] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.983 [2024-11-06 09:04:59.243536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.983 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.983 "name": "Existed_Raid", 00:12:22.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.983 "strip_size_kb": 64, 00:12:22.983 "state": "configuring", 00:12:22.983 "raid_level": "raid0", 00:12:22.983 "superblock": false, 00:12:22.983 "num_base_bdevs": 2, 00:12:22.983 "num_base_bdevs_discovered": 0, 00:12:22.983 "num_base_bdevs_operational": 2, 00:12:22.983 "base_bdevs_list": [ 00:12:22.983 { 00:12:22.983 "name": "BaseBdev1", 00:12:22.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.983 "is_configured": false, 00:12:22.983 "data_offset": 0, 00:12:22.983 "data_size": 0 00:12:22.983 }, 00:12:22.983 { 00:12:22.983 "name": "BaseBdev2", 00:12:22.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.984 "is_configured": false, 00:12:22.984 "data_offset": 0, 00:12:22.984 "data_size": 0 00:12:22.984 } 00:12:22.984 ] 00:12:22.984 }' 00:12:22.984 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.984 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.242 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.242 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.242 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.242 [2024-11-06 09:04:59.739503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.242 [2024-11-06 09:04:59.739544] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:23.242 09:04:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.242 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:23.242 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.242 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.242 [2024-11-06 09:04:59.751479] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:23.243 [2024-11-06 09:04:59.751656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:23.243 [2024-11-06 09:04:59.751777] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.243 [2024-11-06 09:04:59.751883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.243 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.243 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.243 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.243 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.501 [2024-11-06 09:04:59.797044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.502 BaseBdev1 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.502 [ 00:12:23.502 { 00:12:23.502 "name": "BaseBdev1", 00:12:23.502 "aliases": [ 00:12:23.502 "a29fc2d0-83a7-4c4d-84c8-95857a37188d" 00:12:23.502 ], 00:12:23.502 "product_name": "Malloc disk", 00:12:23.502 "block_size": 512, 00:12:23.502 "num_blocks": 65536, 00:12:23.502 "uuid": "a29fc2d0-83a7-4c4d-84c8-95857a37188d", 00:12:23.502 "assigned_rate_limits": { 00:12:23.502 "rw_ios_per_sec": 0, 00:12:23.502 "rw_mbytes_per_sec": 0, 00:12:23.502 "r_mbytes_per_sec": 0, 00:12:23.502 "w_mbytes_per_sec": 0 00:12:23.502 }, 00:12:23.502 "claimed": true, 00:12:23.502 "claim_type": "exclusive_write", 00:12:23.502 "zoned": false, 00:12:23.502 "supported_io_types": { 00:12:23.502 "read": true, 00:12:23.502 "write": true, 00:12:23.502 "unmap": true, 00:12:23.502 "flush": true, 00:12:23.502 "reset": true, 00:12:23.502 "nvme_admin": false, 00:12:23.502 "nvme_io": 
false, 00:12:23.502 "nvme_io_md": false, 00:12:23.502 "write_zeroes": true, 00:12:23.502 "zcopy": true, 00:12:23.502 "get_zone_info": false, 00:12:23.502 "zone_management": false, 00:12:23.502 "zone_append": false, 00:12:23.502 "compare": false, 00:12:23.502 "compare_and_write": false, 00:12:23.502 "abort": true, 00:12:23.502 "seek_hole": false, 00:12:23.502 "seek_data": false, 00:12:23.502 "copy": true, 00:12:23.502 "nvme_iov_md": false 00:12:23.502 }, 00:12:23.502 "memory_domains": [ 00:12:23.502 { 00:12:23.502 "dma_device_id": "system", 00:12:23.502 "dma_device_type": 1 00:12:23.502 }, 00:12:23.502 { 00:12:23.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.502 "dma_device_type": 2 00:12:23.502 } 00:12:23.502 ], 00:12:23.502 "driver_specific": {} 00:12:23.502 } 00:12:23.502 ] 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.502 09:04:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.502 "name": "Existed_Raid", 00:12:23.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.502 "strip_size_kb": 64, 00:12:23.502 "state": "configuring", 00:12:23.502 "raid_level": "raid0", 00:12:23.502 "superblock": false, 00:12:23.502 "num_base_bdevs": 2, 00:12:23.502 "num_base_bdevs_discovered": 1, 00:12:23.502 "num_base_bdevs_operational": 2, 00:12:23.502 "base_bdevs_list": [ 00:12:23.502 { 00:12:23.502 "name": "BaseBdev1", 00:12:23.502 "uuid": "a29fc2d0-83a7-4c4d-84c8-95857a37188d", 00:12:23.502 "is_configured": true, 00:12:23.502 "data_offset": 0, 00:12:23.502 "data_size": 65536 00:12:23.502 }, 00:12:23.502 { 00:12:23.502 "name": "BaseBdev2", 00:12:23.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.502 "is_configured": false, 00:12:23.502 "data_offset": 0, 00:12:23.502 "data_size": 0 00:12:23.502 } 00:12:23.502 ] 00:12:23.502 }' 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.502 09:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.070 09:05:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:24.070 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.071 [2024-11-06 09:05:00.357263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:24.071 [2024-11-06 09:05:00.357322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.071 [2024-11-06 09:05:00.365299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.071 [2024-11-06 09:05:00.367895] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:24.071 [2024-11-06 09:05:00.367949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.071 "name": "Existed_Raid", 00:12:24.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.071 "strip_size_kb": 64, 00:12:24.071 "state": "configuring", 00:12:24.071 "raid_level": "raid0", 00:12:24.071 "superblock": false, 00:12:24.071 "num_base_bdevs": 2, 00:12:24.071 "num_base_bdevs_discovered": 1, 00:12:24.071 "num_base_bdevs_operational": 2, 
00:12:24.071 "base_bdevs_list": [ 00:12:24.071 { 00:12:24.071 "name": "BaseBdev1", 00:12:24.071 "uuid": "a29fc2d0-83a7-4c4d-84c8-95857a37188d", 00:12:24.071 "is_configured": true, 00:12:24.071 "data_offset": 0, 00:12:24.071 "data_size": 65536 00:12:24.071 }, 00:12:24.071 { 00:12:24.071 "name": "BaseBdev2", 00:12:24.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.071 "is_configured": false, 00:12:24.071 "data_offset": 0, 00:12:24.071 "data_size": 0 00:12:24.071 } 00:12:24.071 ] 00:12:24.071 }' 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.071 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.390 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.390 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.390 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.650 [2024-11-06 09:05:00.941019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.650 [2024-11-06 09:05:00.941273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:24.650 [2024-11-06 09:05:00.941330] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:24.650 [2024-11-06 09:05:00.941880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:24.650 [2024-11-06 09:05:00.942218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:24.650 [2024-11-06 09:05:00.942250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:24.650 [2024-11-06 09:05:00.942599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.650 BaseBdev2 00:12:24.650 
09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.650 [ 00:12:24.650 { 00:12:24.650 "name": "BaseBdev2", 00:12:24.650 "aliases": [ 00:12:24.650 "9e3f5be9-9885-4080-8f58-9c30bfb29aae" 00:12:24.650 ], 00:12:24.650 "product_name": "Malloc disk", 00:12:24.650 "block_size": 512, 00:12:24.650 "num_blocks": 65536, 00:12:24.650 "uuid": "9e3f5be9-9885-4080-8f58-9c30bfb29aae", 00:12:24.650 "assigned_rate_limits": { 00:12:24.650 "rw_ios_per_sec": 0, 00:12:24.650 "rw_mbytes_per_sec": 0, 
00:12:24.650 "r_mbytes_per_sec": 0, 00:12:24.650 "w_mbytes_per_sec": 0 00:12:24.650 }, 00:12:24.650 "claimed": true, 00:12:24.650 "claim_type": "exclusive_write", 00:12:24.650 "zoned": false, 00:12:24.650 "supported_io_types": { 00:12:24.650 "read": true, 00:12:24.650 "write": true, 00:12:24.650 "unmap": true, 00:12:24.650 "flush": true, 00:12:24.650 "reset": true, 00:12:24.650 "nvme_admin": false, 00:12:24.650 "nvme_io": false, 00:12:24.650 "nvme_io_md": false, 00:12:24.650 "write_zeroes": true, 00:12:24.650 "zcopy": true, 00:12:24.650 "get_zone_info": false, 00:12:24.650 "zone_management": false, 00:12:24.650 "zone_append": false, 00:12:24.650 "compare": false, 00:12:24.650 "compare_and_write": false, 00:12:24.650 "abort": true, 00:12:24.650 "seek_hole": false, 00:12:24.650 "seek_data": false, 00:12:24.650 "copy": true, 00:12:24.650 "nvme_iov_md": false 00:12:24.650 }, 00:12:24.650 "memory_domains": [ 00:12:24.650 { 00:12:24.650 "dma_device_id": "system", 00:12:24.650 "dma_device_type": 1 00:12:24.650 }, 00:12:24.650 { 00:12:24.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.650 "dma_device_type": 2 00:12:24.650 } 00:12:24.650 ], 00:12:24.650 "driver_specific": {} 00:12:24.650 } 00:12:24.650 ] 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.650 09:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.651 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.651 "name": "Existed_Raid", 00:12:24.651 "uuid": "b503e831-989d-4853-91f8-a356917bfcb9", 00:12:24.651 "strip_size_kb": 64, 00:12:24.651 "state": "online", 00:12:24.651 "raid_level": "raid0", 00:12:24.651 "superblock": false, 00:12:24.651 "num_base_bdevs": 2, 00:12:24.651 "num_base_bdevs_discovered": 2, 00:12:24.651 "num_base_bdevs_operational": 2, 00:12:24.651 "base_bdevs_list": [ 00:12:24.651 { 00:12:24.651 "name": "BaseBdev1", 00:12:24.651 "uuid": "a29fc2d0-83a7-4c4d-84c8-95857a37188d", 00:12:24.651 
"is_configured": true, 00:12:24.651 "data_offset": 0, 00:12:24.651 "data_size": 65536 00:12:24.651 }, 00:12:24.651 { 00:12:24.651 "name": "BaseBdev2", 00:12:24.651 "uuid": "9e3f5be9-9885-4080-8f58-9c30bfb29aae", 00:12:24.651 "is_configured": true, 00:12:24.651 "data_offset": 0, 00:12:24.651 "data_size": 65536 00:12:24.651 } 00:12:24.651 ] 00:12:24.651 }' 00:12:24.651 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.651 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.217 [2024-11-06 09:05:01.517606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.217 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:12:25.217 "name": "Existed_Raid", 00:12:25.217 "aliases": [ 00:12:25.217 "b503e831-989d-4853-91f8-a356917bfcb9" 00:12:25.217 ], 00:12:25.217 "product_name": "Raid Volume", 00:12:25.217 "block_size": 512, 00:12:25.217 "num_blocks": 131072, 00:12:25.217 "uuid": "b503e831-989d-4853-91f8-a356917bfcb9", 00:12:25.218 "assigned_rate_limits": { 00:12:25.218 "rw_ios_per_sec": 0, 00:12:25.218 "rw_mbytes_per_sec": 0, 00:12:25.218 "r_mbytes_per_sec": 0, 00:12:25.218 "w_mbytes_per_sec": 0 00:12:25.218 }, 00:12:25.218 "claimed": false, 00:12:25.218 "zoned": false, 00:12:25.218 "supported_io_types": { 00:12:25.218 "read": true, 00:12:25.218 "write": true, 00:12:25.218 "unmap": true, 00:12:25.218 "flush": true, 00:12:25.218 "reset": true, 00:12:25.218 "nvme_admin": false, 00:12:25.218 "nvme_io": false, 00:12:25.218 "nvme_io_md": false, 00:12:25.218 "write_zeroes": true, 00:12:25.218 "zcopy": false, 00:12:25.218 "get_zone_info": false, 00:12:25.218 "zone_management": false, 00:12:25.218 "zone_append": false, 00:12:25.218 "compare": false, 00:12:25.218 "compare_and_write": false, 00:12:25.218 "abort": false, 00:12:25.218 "seek_hole": false, 00:12:25.218 "seek_data": false, 00:12:25.218 "copy": false, 00:12:25.218 "nvme_iov_md": false 00:12:25.218 }, 00:12:25.218 "memory_domains": [ 00:12:25.218 { 00:12:25.218 "dma_device_id": "system", 00:12:25.218 "dma_device_type": 1 00:12:25.218 }, 00:12:25.218 { 00:12:25.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.218 "dma_device_type": 2 00:12:25.218 }, 00:12:25.218 { 00:12:25.218 "dma_device_id": "system", 00:12:25.218 "dma_device_type": 1 00:12:25.218 }, 00:12:25.218 { 00:12:25.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.218 "dma_device_type": 2 00:12:25.218 } 00:12:25.218 ], 00:12:25.218 "driver_specific": { 00:12:25.218 "raid": { 00:12:25.218 "uuid": "b503e831-989d-4853-91f8-a356917bfcb9", 00:12:25.218 "strip_size_kb": 64, 00:12:25.218 "state": "online", 00:12:25.218 "raid_level": "raid0", 
00:12:25.218 "superblock": false, 00:12:25.218 "num_base_bdevs": 2, 00:12:25.218 "num_base_bdevs_discovered": 2, 00:12:25.218 "num_base_bdevs_operational": 2, 00:12:25.218 "base_bdevs_list": [ 00:12:25.218 { 00:12:25.218 "name": "BaseBdev1", 00:12:25.218 "uuid": "a29fc2d0-83a7-4c4d-84c8-95857a37188d", 00:12:25.218 "is_configured": true, 00:12:25.218 "data_offset": 0, 00:12:25.218 "data_size": 65536 00:12:25.218 }, 00:12:25.218 { 00:12:25.218 "name": "BaseBdev2", 00:12:25.218 "uuid": "9e3f5be9-9885-4080-8f58-9c30bfb29aae", 00:12:25.218 "is_configured": true, 00:12:25.218 "data_offset": 0, 00:12:25.218 "data_size": 65536 00:12:25.218 } 00:12:25.218 ] 00:12:25.218 } 00:12:25.218 } 00:12:25.218 }' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:25.218 BaseBdev2' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.218 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.478 [2024-11-06 09:05:01.769390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:25.478 [2024-11-06 09:05:01.769429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:25.478 [2024-11-06 09:05:01.769489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.478 09:05:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.478 "name": "Existed_Raid", 00:12:25.478 "uuid": "b503e831-989d-4853-91f8-a356917bfcb9", 00:12:25.478 "strip_size_kb": 64, 00:12:25.478 "state": "offline", 00:12:25.478 "raid_level": "raid0", 00:12:25.478 "superblock": false, 00:12:25.478 "num_base_bdevs": 2, 00:12:25.478 "num_base_bdevs_discovered": 1, 00:12:25.478 "num_base_bdevs_operational": 1, 00:12:25.478 "base_bdevs_list": [ 00:12:25.478 { 00:12:25.478 "name": null, 00:12:25.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.478 "is_configured": false, 00:12:25.478 "data_offset": 0, 00:12:25.478 "data_size": 65536 00:12:25.478 }, 00:12:25.478 { 00:12:25.478 "name": "BaseBdev2", 00:12:25.478 "uuid": "9e3f5be9-9885-4080-8f58-9c30bfb29aae", 00:12:25.478 "is_configured": true, 00:12:25.478 "data_offset": 0, 00:12:25.478 "data_size": 65536 00:12:25.478 } 00:12:25.478 ] 00:12:25.478 }' 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.478 09:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.045 09:05:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.045 [2024-11-06 09:05:02.424223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:26.045 [2024-11-06 09:05:02.424475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60568 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 60568 ']' 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 60568 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:26.045 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60568 00:12:26.304 killing process with pid 60568 00:12:26.304 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:26.304 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:26.304 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60568' 00:12:26.304 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 60568 00:12:26.304 [2024-11-06 09:05:02.604112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.304 09:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 60568 00:12:26.304 [2024-11-06 09:05:02.618982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:27.239 ************************************ 00:12:27.239 END TEST raid_state_function_test 00:12:27.239 ************************************ 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:27.239 00:12:27.239 real 0m5.489s 
00:12:27.239 user 0m8.296s 00:12:27.239 sys 0m0.773s 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.239 09:05:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:12:27.239 09:05:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:27.239 09:05:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:27.239 09:05:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:27.239 ************************************ 00:12:27.239 START TEST raid_state_function_test_sb 00:12:27.239 ************************************ 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60821 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60821' 00:12:27.239 Process raid pid: 60821 00:12:27.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60821 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 60821 ']' 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:27.239 09:05:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.538 [2024-11-06 09:05:03.831533] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:12:27.538 [2024-11-06 09:05:03.831726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.538 [2024-11-06 09:05:04.020511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.797 [2024-11-06 09:05:04.148175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.056 [2024-11-06 09:05:04.353157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.056 [2024-11-06 09:05:04.353222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.314 [2024-11-06 09:05:04.833045] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.314 [2024-11-06 09:05:04.833109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.314 [2024-11-06 09:05:04.833127] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.314 [2024-11-06 09:05:04.833144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.314 
09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.314 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.573 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.573 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.573 "name": "Existed_Raid", 00:12:28.573 "uuid": "987ec649-0635-4af6-a402-147436ebe64c", 00:12:28.573 "strip_size_kb": 
64, 00:12:28.573 "state": "configuring", 00:12:28.573 "raid_level": "raid0", 00:12:28.573 "superblock": true, 00:12:28.573 "num_base_bdevs": 2, 00:12:28.573 "num_base_bdevs_discovered": 0, 00:12:28.573 "num_base_bdevs_operational": 2, 00:12:28.573 "base_bdevs_list": [ 00:12:28.573 { 00:12:28.573 "name": "BaseBdev1", 00:12:28.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.573 "is_configured": false, 00:12:28.573 "data_offset": 0, 00:12:28.573 "data_size": 0 00:12:28.573 }, 00:12:28.573 { 00:12:28.573 "name": "BaseBdev2", 00:12:28.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.573 "is_configured": false, 00:12:28.573 "data_offset": 0, 00:12:28.573 "data_size": 0 00:12:28.573 } 00:12:28.573 ] 00:12:28.573 }' 00:12:28.573 09:05:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.573 09:05:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.831 [2024-11-06 09:05:05.309168] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.831 [2024-11-06 09:05:05.309241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.831 09:05:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.831 [2024-11-06 09:05:05.317180] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.831 [2024-11-06 09:05:05.317257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.831 [2024-11-06 09:05:05.317273] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.831 [2024-11-06 09:05:05.317293] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.831 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.831 [2024-11-06 09:05:05.363993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.090 BaseBdev1 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.090 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.090 [ 00:12:29.090 { 00:12:29.090 "name": "BaseBdev1", 00:12:29.090 "aliases": [ 00:12:29.090 "0d5f4406-82f7-401c-9c55-ffe099827e5d" 00:12:29.090 ], 00:12:29.090 "product_name": "Malloc disk", 00:12:29.090 "block_size": 512, 00:12:29.090 "num_blocks": 65536, 00:12:29.090 "uuid": "0d5f4406-82f7-401c-9c55-ffe099827e5d", 00:12:29.090 "assigned_rate_limits": { 00:12:29.090 "rw_ios_per_sec": 0, 00:12:29.090 "rw_mbytes_per_sec": 0, 00:12:29.090 "r_mbytes_per_sec": 0, 00:12:29.091 "w_mbytes_per_sec": 0 00:12:29.091 }, 00:12:29.091 "claimed": true, 00:12:29.091 "claim_type": "exclusive_write", 00:12:29.091 "zoned": false, 00:12:29.091 "supported_io_types": { 00:12:29.091 "read": true, 00:12:29.091 "write": true, 00:12:29.091 "unmap": true, 00:12:29.091 "flush": true, 00:12:29.091 "reset": true, 00:12:29.091 "nvme_admin": false, 00:12:29.091 "nvme_io": false, 00:12:29.091 "nvme_io_md": false, 00:12:29.091 "write_zeroes": true, 00:12:29.091 "zcopy": true, 00:12:29.091 "get_zone_info": false, 00:12:29.091 "zone_management": false, 00:12:29.091 "zone_append": false, 00:12:29.091 "compare": false, 00:12:29.091 "compare_and_write": false, 00:12:29.091 
"abort": true, 00:12:29.091 "seek_hole": false, 00:12:29.091 "seek_data": false, 00:12:29.091 "copy": true, 00:12:29.091 "nvme_iov_md": false 00:12:29.091 }, 00:12:29.091 "memory_domains": [ 00:12:29.091 { 00:12:29.091 "dma_device_id": "system", 00:12:29.091 "dma_device_type": 1 00:12:29.091 }, 00:12:29.091 { 00:12:29.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.091 "dma_device_type": 2 00:12:29.091 } 00:12:29.091 ], 00:12:29.091 "driver_specific": {} 00:12:29.091 } 00:12:29.091 ] 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.091 "name": "Existed_Raid", 00:12:29.091 "uuid": "365de8f7-d682-4fad-94fd-e6e257186bb9", 00:12:29.091 "strip_size_kb": 64, 00:12:29.091 "state": "configuring", 00:12:29.091 "raid_level": "raid0", 00:12:29.091 "superblock": true, 00:12:29.091 "num_base_bdevs": 2, 00:12:29.091 "num_base_bdevs_discovered": 1, 00:12:29.091 "num_base_bdevs_operational": 2, 00:12:29.091 "base_bdevs_list": [ 00:12:29.091 { 00:12:29.091 "name": "BaseBdev1", 00:12:29.091 "uuid": "0d5f4406-82f7-401c-9c55-ffe099827e5d", 00:12:29.091 "is_configured": true, 00:12:29.091 "data_offset": 2048, 00:12:29.091 "data_size": 63488 00:12:29.091 }, 00:12:29.091 { 00:12:29.091 "name": "BaseBdev2", 00:12:29.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.091 "is_configured": false, 00:12:29.091 "data_offset": 0, 00:12:29.091 "data_size": 0 00:12:29.091 } 00:12:29.091 ] 00:12:29.091 }' 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.091 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.657 [2024-11-06 09:05:05.920204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:29.657 [2024-11-06 09:05:05.920281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.657 [2024-11-06 09:05:05.932282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.657 [2024-11-06 09:05:05.934911] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.657 [2024-11-06 09:05:05.935086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.657 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.658 "name": "Existed_Raid", 00:12:29.658 "uuid": "7690e282-c0c3-4b43-9b4f-885a5b3195c4", 00:12:29.658 "strip_size_kb": 64, 00:12:29.658 "state": "configuring", 00:12:29.658 "raid_level": "raid0", 00:12:29.658 "superblock": true, 00:12:29.658 "num_base_bdevs": 2, 00:12:29.658 "num_base_bdevs_discovered": 1, 00:12:29.658 "num_base_bdevs_operational": 2, 00:12:29.658 "base_bdevs_list": [ 00:12:29.658 { 00:12:29.658 "name": "BaseBdev1", 00:12:29.658 "uuid": "0d5f4406-82f7-401c-9c55-ffe099827e5d", 00:12:29.658 "is_configured": true, 00:12:29.658 "data_offset": 2048, 
00:12:29.658 "data_size": 63488 00:12:29.658 }, 00:12:29.658 { 00:12:29.658 "name": "BaseBdev2", 00:12:29.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.658 "is_configured": false, 00:12:29.658 "data_offset": 0, 00:12:29.658 "data_size": 0 00:12:29.658 } 00:12:29.658 ] 00:12:29.658 }' 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.658 09:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.917 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:29.917 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.917 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.175 [2024-11-06 09:05:06.482433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.175 [2024-11-06 09:05:06.483018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:30.175 [2024-11-06 09:05:06.483045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:30.175 BaseBdev2 00:12:30.175 [2024-11-06 09:05:06.483419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:30.176 [2024-11-06 09:05:06.483654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:30.176 [2024-11-06 09:05:06.483683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:30.176 [2024-11-06 09:05:06.483859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 [ 00:12:30.176 { 00:12:30.176 "name": "BaseBdev2", 00:12:30.176 "aliases": [ 00:12:30.176 "86e1bc04-6fb8-4c87-84d5-1382b6122258" 00:12:30.176 ], 00:12:30.176 "product_name": "Malloc disk", 00:12:30.176 "block_size": 512, 00:12:30.176 "num_blocks": 65536, 00:12:30.176 "uuid": "86e1bc04-6fb8-4c87-84d5-1382b6122258", 00:12:30.176 "assigned_rate_limits": { 00:12:30.176 "rw_ios_per_sec": 0, 00:12:30.176 "rw_mbytes_per_sec": 0, 00:12:30.176 "r_mbytes_per_sec": 0, 00:12:30.176 "w_mbytes_per_sec": 0 00:12:30.176 }, 00:12:30.176 "claimed": true, 00:12:30.176 "claim_type": 
"exclusive_write", 00:12:30.176 "zoned": false, 00:12:30.176 "supported_io_types": { 00:12:30.176 "read": true, 00:12:30.176 "write": true, 00:12:30.176 "unmap": true, 00:12:30.176 "flush": true, 00:12:30.176 "reset": true, 00:12:30.176 "nvme_admin": false, 00:12:30.176 "nvme_io": false, 00:12:30.176 "nvme_io_md": false, 00:12:30.176 "write_zeroes": true, 00:12:30.176 "zcopy": true, 00:12:30.176 "get_zone_info": false, 00:12:30.176 "zone_management": false, 00:12:30.176 "zone_append": false, 00:12:30.176 "compare": false, 00:12:30.176 "compare_and_write": false, 00:12:30.176 "abort": true, 00:12:30.176 "seek_hole": false, 00:12:30.176 "seek_data": false, 00:12:30.176 "copy": true, 00:12:30.176 "nvme_iov_md": false 00:12:30.176 }, 00:12:30.176 "memory_domains": [ 00:12:30.176 { 00:12:30.176 "dma_device_id": "system", 00:12:30.176 "dma_device_type": 1 00:12:30.176 }, 00:12:30.176 { 00:12:30.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.176 "dma_device_type": 2 00:12:30.176 } 00:12:30.176 ], 00:12:30.176 "driver_specific": {} 00:12:30.176 } 00:12:30.176 ] 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.176 "name": "Existed_Raid", 00:12:30.176 "uuid": "7690e282-c0c3-4b43-9b4f-885a5b3195c4", 00:12:30.176 "strip_size_kb": 64, 00:12:30.176 "state": "online", 00:12:30.176 "raid_level": "raid0", 00:12:30.176 "superblock": true, 00:12:30.176 "num_base_bdevs": 2, 00:12:30.176 "num_base_bdevs_discovered": 2, 00:12:30.176 "num_base_bdevs_operational": 2, 00:12:30.176 "base_bdevs_list": [ 00:12:30.176 { 00:12:30.176 "name": "BaseBdev1", 00:12:30.176 "uuid": "0d5f4406-82f7-401c-9c55-ffe099827e5d", 00:12:30.176 "is_configured": true, 00:12:30.176 "data_offset": 2048, 00:12:30.176 "data_size": 63488 
00:12:30.176 }, 00:12:30.176 { 00:12:30.176 "name": "BaseBdev2", 00:12:30.176 "uuid": "86e1bc04-6fb8-4c87-84d5-1382b6122258", 00:12:30.176 "is_configured": true, 00:12:30.176 "data_offset": 2048, 00:12:30.176 "data_size": 63488 00:12:30.176 } 00:12:30.176 ] 00:12:30.176 }' 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.176 09:05:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.744 [2024-11-06 09:05:07.055069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:30.744 "name": 
"Existed_Raid", 00:12:30.744 "aliases": [ 00:12:30.744 "7690e282-c0c3-4b43-9b4f-885a5b3195c4" 00:12:30.744 ], 00:12:30.744 "product_name": "Raid Volume", 00:12:30.744 "block_size": 512, 00:12:30.744 "num_blocks": 126976, 00:12:30.744 "uuid": "7690e282-c0c3-4b43-9b4f-885a5b3195c4", 00:12:30.744 "assigned_rate_limits": { 00:12:30.744 "rw_ios_per_sec": 0, 00:12:30.744 "rw_mbytes_per_sec": 0, 00:12:30.744 "r_mbytes_per_sec": 0, 00:12:30.744 "w_mbytes_per_sec": 0 00:12:30.744 }, 00:12:30.744 "claimed": false, 00:12:30.744 "zoned": false, 00:12:30.744 "supported_io_types": { 00:12:30.744 "read": true, 00:12:30.744 "write": true, 00:12:30.744 "unmap": true, 00:12:30.744 "flush": true, 00:12:30.744 "reset": true, 00:12:30.744 "nvme_admin": false, 00:12:30.744 "nvme_io": false, 00:12:30.744 "nvme_io_md": false, 00:12:30.744 "write_zeroes": true, 00:12:30.744 "zcopy": false, 00:12:30.744 "get_zone_info": false, 00:12:30.744 "zone_management": false, 00:12:30.744 "zone_append": false, 00:12:30.744 "compare": false, 00:12:30.744 "compare_and_write": false, 00:12:30.744 "abort": false, 00:12:30.744 "seek_hole": false, 00:12:30.744 "seek_data": false, 00:12:30.744 "copy": false, 00:12:30.744 "nvme_iov_md": false 00:12:30.744 }, 00:12:30.744 "memory_domains": [ 00:12:30.744 { 00:12:30.744 "dma_device_id": "system", 00:12:30.744 "dma_device_type": 1 00:12:30.744 }, 00:12:30.744 { 00:12:30.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.744 "dma_device_type": 2 00:12:30.744 }, 00:12:30.744 { 00:12:30.744 "dma_device_id": "system", 00:12:30.744 "dma_device_type": 1 00:12:30.744 }, 00:12:30.744 { 00:12:30.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.744 "dma_device_type": 2 00:12:30.744 } 00:12:30.744 ], 00:12:30.744 "driver_specific": { 00:12:30.744 "raid": { 00:12:30.744 "uuid": "7690e282-c0c3-4b43-9b4f-885a5b3195c4", 00:12:30.744 "strip_size_kb": 64, 00:12:30.744 "state": "online", 00:12:30.744 "raid_level": "raid0", 00:12:30.744 "superblock": true, 00:12:30.744 
"num_base_bdevs": 2, 00:12:30.744 "num_base_bdevs_discovered": 2, 00:12:30.744 "num_base_bdevs_operational": 2, 00:12:30.744 "base_bdevs_list": [ 00:12:30.744 { 00:12:30.744 "name": "BaseBdev1", 00:12:30.744 "uuid": "0d5f4406-82f7-401c-9c55-ffe099827e5d", 00:12:30.744 "is_configured": true, 00:12:30.744 "data_offset": 2048, 00:12:30.744 "data_size": 63488 00:12:30.744 }, 00:12:30.744 { 00:12:30.744 "name": "BaseBdev2", 00:12:30.744 "uuid": "86e1bc04-6fb8-4c87-84d5-1382b6122258", 00:12:30.744 "is_configured": true, 00:12:30.744 "data_offset": 2048, 00:12:30.744 "data_size": 63488 00:12:30.744 } 00:12:30.744 ] 00:12:30.744 } 00:12:30.744 } 00:12:30.744 }' 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:30.744 BaseBdev2' 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.744 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:30.745 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:30.745 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:30.745 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.745 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.745 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:30.745 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:31.004 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.004 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.004 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.004 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.005 [2024-11-06 09:05:07.350880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:31.005 [2024-11-06 09:05:07.350933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.005 [2024-11-06 09:05:07.350998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.005 09:05:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.005 "name": "Existed_Raid", 00:12:31.005 "uuid": "7690e282-c0c3-4b43-9b4f-885a5b3195c4", 00:12:31.005 "strip_size_kb": 64, 00:12:31.005 "state": "offline", 00:12:31.005 "raid_level": "raid0", 00:12:31.005 "superblock": true, 00:12:31.005 "num_base_bdevs": 2, 00:12:31.005 "num_base_bdevs_discovered": 1, 00:12:31.005 "num_base_bdevs_operational": 1, 00:12:31.005 "base_bdevs_list": [ 00:12:31.005 { 00:12:31.005 "name": null, 00:12:31.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.005 "is_configured": false, 00:12:31.005 "data_offset": 0, 00:12:31.005 "data_size": 63488 00:12:31.005 }, 00:12:31.005 { 00:12:31.005 "name": "BaseBdev2", 00:12:31.005 "uuid": "86e1bc04-6fb8-4c87-84d5-1382b6122258", 00:12:31.005 "is_configured": true, 00:12:31.005 "data_offset": 2048, 00:12:31.005 "data_size": 63488 00:12:31.005 } 00:12:31.005 ] 00:12:31.005 }' 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.005 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:31.576 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:31.576 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.576 09:05:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:31.576 09:05:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.576 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 09:05:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 [2024-11-06 09:05:08.018585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:31.576 [2024-11-06 09:05:08.018789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.576 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:31.834 09:05:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60821 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 60821 ']' 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 60821 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60821 00:12:31.834 killing process with pid 60821 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60821' 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 60821 00:12:31.834 [2024-11-06 09:05:08.193033] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.834 09:05:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 60821 00:12:31.834 [2024-11-06 09:05:08.207805] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.770 09:05:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:12:32.770 00:12:32.770 real 0m5.501s 00:12:32.770 user 0m8.324s 00:12:32.770 sys 0m0.793s 00:12:32.770 ************************************ 00:12:32.770 END TEST raid_state_function_test_sb 00:12:32.770 ************************************ 00:12:32.770 09:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:32.770 09:05:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.770 09:05:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:12:32.770 09:05:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:32.770 09:05:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:32.770 09:05:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.770 ************************************ 00:12:32.770 START TEST raid_superblock_test 00:12:32.770 ************************************ 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:32.770 09:05:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61078 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61078 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 61078 ']' 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:32.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:32.770 09:05:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.028 [2024-11-06 09:05:09.374488] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:33.028 [2024-11-06 09:05:09.375042] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61078 ] 00:12:33.028 [2024-11-06 09:05:09.557867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.289 [2024-11-06 09:05:09.722003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.548 [2024-11-06 09:05:09.935924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.548 [2024-11-06 09:05:09.935992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:34.127 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.128 09:05:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.128 malloc1 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.128 [2024-11-06 09:05:10.410672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:34.128 [2024-11-06 09:05:10.410763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.128 [2024-11-06 09:05:10.410798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:34.128 [2024-11-06 09:05:10.410828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.128 [2024-11-06 09:05:10.413547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.128 [2024-11-06 09:05:10.413590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:34.128 pt1 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:34.128 09:05:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.128 malloc2 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.128 [2024-11-06 09:05:10.463022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:34.128 [2024-11-06 09:05:10.463086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.128 [2024-11-06 09:05:10.463117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:34.128 
[2024-11-06 09:05:10.463131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.128 [2024-11-06 09:05:10.465874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.128 [2024-11-06 09:05:10.466050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:34.128 pt2 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.128 [2024-11-06 09:05:10.475101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:34.128 [2024-11-06 09:05:10.477488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:34.128 [2024-11-06 09:05:10.477868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:34.128 [2024-11-06 09:05:10.477893] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:34.128 [2024-11-06 09:05:10.478223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:34.128 [2024-11-06 09:05:10.478411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:34.128 [2024-11-06 09:05:10.478433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:34.128 [2024-11-06 09:05:10.478622] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.128 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.128 "name": "raid_bdev1", 00:12:34.128 "uuid": 
"3d22a718-0242-4609-8be4-c6c93b321684", 00:12:34.128 "strip_size_kb": 64, 00:12:34.128 "state": "online", 00:12:34.128 "raid_level": "raid0", 00:12:34.128 "superblock": true, 00:12:34.128 "num_base_bdevs": 2, 00:12:34.128 "num_base_bdevs_discovered": 2, 00:12:34.128 "num_base_bdevs_operational": 2, 00:12:34.128 "base_bdevs_list": [ 00:12:34.128 { 00:12:34.128 "name": "pt1", 00:12:34.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.128 "is_configured": true, 00:12:34.128 "data_offset": 2048, 00:12:34.128 "data_size": 63488 00:12:34.128 }, 00:12:34.128 { 00:12:34.129 "name": "pt2", 00:12:34.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.129 "is_configured": true, 00:12:34.129 "data_offset": 2048, 00:12:34.129 "data_size": 63488 00:12:34.129 } 00:12:34.129 ] 00:12:34.129 }' 00:12:34.129 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.129 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.724 
09:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:34.724 [2024-11-06 09:05:10.959643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:34.724 09:05:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.724 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:34.724 "name": "raid_bdev1", 00:12:34.724 "aliases": [ 00:12:34.724 "3d22a718-0242-4609-8be4-c6c93b321684" 00:12:34.724 ], 00:12:34.724 "product_name": "Raid Volume", 00:12:34.724 "block_size": 512, 00:12:34.724 "num_blocks": 126976, 00:12:34.724 "uuid": "3d22a718-0242-4609-8be4-c6c93b321684", 00:12:34.724 "assigned_rate_limits": { 00:12:34.724 "rw_ios_per_sec": 0, 00:12:34.724 "rw_mbytes_per_sec": 0, 00:12:34.724 "r_mbytes_per_sec": 0, 00:12:34.724 "w_mbytes_per_sec": 0 00:12:34.724 }, 00:12:34.724 "claimed": false, 00:12:34.724 "zoned": false, 00:12:34.724 "supported_io_types": { 00:12:34.724 "read": true, 00:12:34.724 "write": true, 00:12:34.724 "unmap": true, 00:12:34.724 "flush": true, 00:12:34.724 "reset": true, 00:12:34.724 "nvme_admin": false, 00:12:34.724 "nvme_io": false, 00:12:34.724 "nvme_io_md": false, 00:12:34.724 "write_zeroes": true, 00:12:34.724 "zcopy": false, 00:12:34.724 "get_zone_info": false, 00:12:34.725 "zone_management": false, 00:12:34.725 "zone_append": false, 00:12:34.725 "compare": false, 00:12:34.725 "compare_and_write": false, 00:12:34.725 "abort": false, 00:12:34.725 "seek_hole": false, 00:12:34.725 "seek_data": false, 00:12:34.725 "copy": false, 00:12:34.725 "nvme_iov_md": false 00:12:34.725 }, 00:12:34.725 "memory_domains": [ 00:12:34.725 { 00:12:34.725 "dma_device_id": "system", 00:12:34.725 "dma_device_type": 1 00:12:34.725 }, 00:12:34.725 { 00:12:34.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.725 "dma_device_type": 2 00:12:34.725 }, 00:12:34.725 { 00:12:34.725 "dma_device_id": "system", 00:12:34.725 
"dma_device_type": 1 00:12:34.725 }, 00:12:34.725 { 00:12:34.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.725 "dma_device_type": 2 00:12:34.725 } 00:12:34.725 ], 00:12:34.725 "driver_specific": { 00:12:34.725 "raid": { 00:12:34.725 "uuid": "3d22a718-0242-4609-8be4-c6c93b321684", 00:12:34.725 "strip_size_kb": 64, 00:12:34.725 "state": "online", 00:12:34.725 "raid_level": "raid0", 00:12:34.725 "superblock": true, 00:12:34.725 "num_base_bdevs": 2, 00:12:34.725 "num_base_bdevs_discovered": 2, 00:12:34.725 "num_base_bdevs_operational": 2, 00:12:34.725 "base_bdevs_list": [ 00:12:34.725 { 00:12:34.725 "name": "pt1", 00:12:34.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:34.725 "is_configured": true, 00:12:34.725 "data_offset": 2048, 00:12:34.725 "data_size": 63488 00:12:34.725 }, 00:12:34.725 { 00:12:34.725 "name": "pt2", 00:12:34.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.725 "is_configured": true, 00:12:34.725 "data_offset": 2048, 00:12:34.725 "data_size": 63488 00:12:34.725 } 00:12:34.725 ] 00:12:34.725 } 00:12:34.725 } 00:12:34.725 }' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:34.725 pt2' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.725 [2024-11-06 09:05:11.223694] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:12:34.725 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3d22a718-0242-4609-8be4-c6c93b321684 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3d22a718-0242-4609-8be4-c6c93b321684 ']' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 [2024-11-06 09:05:11.267237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.985 [2024-11-06 09:05:11.267268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.985 [2024-11-06 09:05:11.267362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.985 [2024-11-06 09:05:11.267425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:34.985 [2024-11-06 09:05:11.267445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 
09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 [2024-11-06 09:05:11.403395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:34.985 [2024-11-06 09:05:11.406536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:34.985 [2024-11-06 09:05:11.407213] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:34.985 [2024-11-06 09:05:11.407337] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:34.985 [2024-11-06 09:05:11.407391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:34.985 [2024-11-06 09:05:11.407425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:34.985 request: 00:12:34.985 { 00:12:34.985 "name": "raid_bdev1", 00:12:34.985 "raid_level": "raid0", 00:12:34.985 "base_bdevs": [ 00:12:34.985 "malloc1", 00:12:34.985 "malloc2" 00:12:34.985 ], 00:12:34.985 "strip_size_kb": 64, 00:12:34.985 "superblock": false, 00:12:34.985 "method": "bdev_raid_create", 00:12:34.985 "req_id": 1 00:12:34.985 } 00:12:34.985 Got JSON-RPC error response 00:12:34.985 response: 00:12:34.985 { 00:12:34.985 "code": -17, 00:12:34.985 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:34.985 } 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 [2024-11-06 09:05:11.463323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:34.985 [2024-11-06 09:05:11.463533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.985 [2024-11-06 09:05:11.463608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:34.985 [2024-11-06 09:05:11.463720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.985 [2024-11-06 09:05:11.466687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.985 [2024-11-06 09:05:11.466857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:34.985 [2024-11-06 09:05:11.467065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:34.985 [2024-11-06 09:05:11.467241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:34.985 pt1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.985 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.244 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.244 "name": "raid_bdev1", 00:12:35.244 "uuid": "3d22a718-0242-4609-8be4-c6c93b321684", 00:12:35.244 "strip_size_kb": 64, 00:12:35.244 "state": "configuring", 00:12:35.244 "raid_level": "raid0", 00:12:35.244 "superblock": true, 00:12:35.244 "num_base_bdevs": 2, 00:12:35.244 "num_base_bdevs_discovered": 1, 00:12:35.244 "num_base_bdevs_operational": 2, 00:12:35.244 "base_bdevs_list": [ 00:12:35.244 { 00:12:35.244 "name": "pt1", 00:12:35.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.244 "is_configured": true, 00:12:35.244 "data_offset": 2048, 00:12:35.244 "data_size": 63488 00:12:35.244 }, 00:12:35.244 { 00:12:35.244 "name": null, 00:12:35.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.244 "is_configured": false, 00:12:35.244 "data_offset": 2048, 00:12:35.244 "data_size": 63488 00:12:35.244 } 00:12:35.244 ] 00:12:35.244 }' 00:12:35.244 09:05:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.244 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.503 [2024-11-06 09:05:11.979754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:35.503 [2024-11-06 09:05:11.980029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.503 [2024-11-06 09:05:11.980070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:35.503 [2024-11-06 09:05:11.980089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.503 [2024-11-06 09:05:11.980726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.503 [2024-11-06 09:05:11.980762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:35.503 [2024-11-06 09:05:11.980906] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:35.503 [2024-11-06 09:05:11.980944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:35.503 [2024-11-06 09:05:11.981080] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:35.503 [2024-11-06 09:05:11.981100] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:35.503 [2024-11-06 09:05:11.981390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:35.503 [2024-11-06 09:05:11.981606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:35.503 [2024-11-06 09:05:11.981622] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:35.503 [2024-11-06 09:05:11.981799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.503 pt2 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.503 09:05:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.503 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.761 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.761 "name": "raid_bdev1", 00:12:35.761 "uuid": "3d22a718-0242-4609-8be4-c6c93b321684", 00:12:35.761 "strip_size_kb": 64, 00:12:35.761 "state": "online", 00:12:35.761 "raid_level": "raid0", 00:12:35.761 "superblock": true, 00:12:35.761 "num_base_bdevs": 2, 00:12:35.761 "num_base_bdevs_discovered": 2, 00:12:35.761 "num_base_bdevs_operational": 2, 00:12:35.761 "base_bdevs_list": [ 00:12:35.761 { 00:12:35.761 "name": "pt1", 00:12:35.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:35.761 "is_configured": true, 00:12:35.761 "data_offset": 2048, 00:12:35.761 "data_size": 63488 00:12:35.761 }, 00:12:35.761 { 00:12:35.761 "name": "pt2", 00:12:35.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.761 "is_configured": true, 00:12:35.761 "data_offset": 2048, 00:12:35.761 "data_size": 63488 00:12:35.761 } 00:12:35.761 ] 00:12:35.761 }' 00:12:35.761 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.761 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:36.020 
09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.020 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:36.278 [2024-11-06 09:05:12.556216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:36.278 "name": "raid_bdev1", 00:12:36.278 "aliases": [ 00:12:36.278 "3d22a718-0242-4609-8be4-c6c93b321684" 00:12:36.278 ], 00:12:36.278 "product_name": "Raid Volume", 00:12:36.278 "block_size": 512, 00:12:36.278 "num_blocks": 126976, 00:12:36.278 "uuid": "3d22a718-0242-4609-8be4-c6c93b321684", 00:12:36.278 "assigned_rate_limits": { 00:12:36.278 "rw_ios_per_sec": 0, 00:12:36.278 "rw_mbytes_per_sec": 0, 00:12:36.278 "r_mbytes_per_sec": 0, 00:12:36.278 "w_mbytes_per_sec": 0 00:12:36.278 }, 00:12:36.278 "claimed": false, 00:12:36.278 "zoned": false, 00:12:36.278 "supported_io_types": { 00:12:36.278 "read": true, 00:12:36.278 "write": true, 00:12:36.278 "unmap": true, 00:12:36.278 "flush": true, 00:12:36.278 "reset": true, 00:12:36.278 "nvme_admin": false, 00:12:36.278 "nvme_io": false, 00:12:36.278 "nvme_io_md": false, 00:12:36.278 
"write_zeroes": true, 00:12:36.278 "zcopy": false, 00:12:36.278 "get_zone_info": false, 00:12:36.278 "zone_management": false, 00:12:36.278 "zone_append": false, 00:12:36.278 "compare": false, 00:12:36.278 "compare_and_write": false, 00:12:36.278 "abort": false, 00:12:36.278 "seek_hole": false, 00:12:36.278 "seek_data": false, 00:12:36.278 "copy": false, 00:12:36.278 "nvme_iov_md": false 00:12:36.278 }, 00:12:36.278 "memory_domains": [ 00:12:36.278 { 00:12:36.278 "dma_device_id": "system", 00:12:36.278 "dma_device_type": 1 00:12:36.278 }, 00:12:36.278 { 00:12:36.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.278 "dma_device_type": 2 00:12:36.278 }, 00:12:36.278 { 00:12:36.278 "dma_device_id": "system", 00:12:36.278 "dma_device_type": 1 00:12:36.278 }, 00:12:36.278 { 00:12:36.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.278 "dma_device_type": 2 00:12:36.278 } 00:12:36.278 ], 00:12:36.278 "driver_specific": { 00:12:36.278 "raid": { 00:12:36.278 "uuid": "3d22a718-0242-4609-8be4-c6c93b321684", 00:12:36.278 "strip_size_kb": 64, 00:12:36.278 "state": "online", 00:12:36.278 "raid_level": "raid0", 00:12:36.278 "superblock": true, 00:12:36.278 "num_base_bdevs": 2, 00:12:36.278 "num_base_bdevs_discovered": 2, 00:12:36.278 "num_base_bdevs_operational": 2, 00:12:36.278 "base_bdevs_list": [ 00:12:36.278 { 00:12:36.278 "name": "pt1", 00:12:36.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:36.278 "is_configured": true, 00:12:36.278 "data_offset": 2048, 00:12:36.278 "data_size": 63488 00:12:36.278 }, 00:12:36.278 { 00:12:36.278 "name": "pt2", 00:12:36.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:36.278 "is_configured": true, 00:12:36.278 "data_offset": 2048, 00:12:36.278 "data_size": 63488 00:12:36.278 } 00:12:36.278 ] 00:12:36.278 } 00:12:36.278 } 00:12:36.278 }' 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:36.278 pt2' 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:36.278 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.537 09:05:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.537 [2024-11-06 09:05:12.828280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3d22a718-0242-4609-8be4-c6c93b321684 '!=' 3d22a718-0242-4609-8be4-c6c93b321684 ']' 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61078 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 61078 ']' 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 61078 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61078 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61078' 00:12:36.537 killing process with pid 61078 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 61078 00:12:36.537 [2024-11-06 09:05:12.903645] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.537 09:05:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 61078 00:12:36.537 [2024-11-06 09:05:12.903757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.537 [2024-11-06 09:05:12.903834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.537 [2024-11-06 09:05:12.903868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:36.795 [2024-11-06 09:05:13.084262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.731 09:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:37.731 00:12:37.731 real 0m4.825s 00:12:37.731 user 0m7.137s 00:12:37.731 sys 0m0.707s 00:12:37.731 09:05:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.731 ************************************ 00:12:37.731 END TEST raid_superblock_test 00:12:37.731 ************************************ 00:12:37.731 09:05:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.731 09:05:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:12:37.731 09:05:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:37.731 09:05:14 bdev_raid -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:12:37.731 09:05:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.731 ************************************ 00:12:37.731 START TEST raid_read_error_test 00:12:37.731 ************************************ 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nm3vqGDNic 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61290 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61290 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 61290 ']' 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:37.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:37.731 09:05:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.001 [2024-11-06 09:05:14.268484] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:38.001 [2024-11-06 09:05:14.268666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61290 ] 00:12:38.001 [2024-11-06 09:05:14.452926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.258 [2024-11-06 09:05:14.578122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.258 [2024-11-06 09:05:14.778806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.258 [2024-11-06 09:05:14.778885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.826 BaseBdev1_malloc 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.826 true 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.826 [2024-11-06 09:05:15.317865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.826 [2024-11-06 09:05:15.317933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.826 [2024-11-06 09:05:15.317970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:38.826 [2024-11-06 09:05:15.317992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.826 [2024-11-06 09:05:15.321223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.826 [2024-11-06 09:05:15.321287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.826 BaseBdev1 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.826 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:39.085 BaseBdev2_malloc 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 true 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 [2024-11-06 09:05:15.374371] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:39.085 [2024-11-06 09:05:15.374613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.085 [2024-11-06 09:05:15.374649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:39.085 [2024-11-06 09:05:15.374668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.085 [2024-11-06 09:05:15.377430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.085 [2024-11-06 09:05:15.377479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:39.085 BaseBdev2 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:39.085 09:05:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 [2024-11-06 09:05:15.382529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.085 [2024-11-06 09:05:15.384994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.085 [2024-11-06 09:05:15.385247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:39.085 [2024-11-06 09:05:15.385272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:39.085 [2024-11-06 09:05:15.385539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:39.085 [2024-11-06 09:05:15.385801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:39.085 [2024-11-06 09:05:15.385821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:39.085 [2024-11-06 09:05:15.386033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.085 "name": "raid_bdev1", 00:12:39.085 "uuid": "b733201a-2a6e-44a1-878a-735ab5537b28", 00:12:39.085 "strip_size_kb": 64, 00:12:39.085 "state": "online", 00:12:39.085 "raid_level": "raid0", 00:12:39.085 "superblock": true, 00:12:39.085 "num_base_bdevs": 2, 00:12:39.085 "num_base_bdevs_discovered": 2, 00:12:39.085 "num_base_bdevs_operational": 2, 00:12:39.085 "base_bdevs_list": [ 00:12:39.085 { 00:12:39.085 "name": "BaseBdev1", 00:12:39.085 "uuid": "64510a7f-b9b2-55df-9151-9e8e5201afc9", 00:12:39.085 "is_configured": true, 00:12:39.085 "data_offset": 2048, 00:12:39.085 "data_size": 63488 00:12:39.085 }, 00:12:39.085 { 00:12:39.085 "name": "BaseBdev2", 00:12:39.085 "uuid": "f1cfea52-c8d3-5f32-9765-63214aa22d0a", 00:12:39.085 "is_configured": true, 00:12:39.085 "data_offset": 2048, 00:12:39.085 "data_size": 63488 00:12:39.085 } 00:12:39.085 ] 00:12:39.085 }' 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.085 09:05:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.651 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:39.651 09:05:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:39.651 [2024-11-06 09:05:16.000158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:40.586 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:40.586 09:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.587 "name": "raid_bdev1", 00:12:40.587 "uuid": "b733201a-2a6e-44a1-878a-735ab5537b28", 00:12:40.587 "strip_size_kb": 64, 00:12:40.587 "state": "online", 00:12:40.587 "raid_level": "raid0", 00:12:40.587 "superblock": true, 00:12:40.587 "num_base_bdevs": 2, 00:12:40.587 "num_base_bdevs_discovered": 2, 00:12:40.587 "num_base_bdevs_operational": 2, 00:12:40.587 "base_bdevs_list": [ 00:12:40.587 { 00:12:40.587 "name": "BaseBdev1", 00:12:40.587 "uuid": "64510a7f-b9b2-55df-9151-9e8e5201afc9", 00:12:40.587 "is_configured": true, 00:12:40.587 "data_offset": 2048, 00:12:40.587 "data_size": 63488 00:12:40.587 }, 00:12:40.587 { 00:12:40.587 "name": "BaseBdev2", 00:12:40.587 "uuid": "f1cfea52-c8d3-5f32-9765-63214aa22d0a", 00:12:40.587 "is_configured": true, 00:12:40.587 "data_offset": 2048, 00:12:40.587 "data_size": 63488 00:12:40.587 } 00:12:40.587 ] 00:12:40.587 }' 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.587 09:05:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.155 [2024-11-06 09:05:17.475290] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.155 [2024-11-06 09:05:17.475329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.155 [2024-11-06 09:05:17.478793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.155 [2024-11-06 09:05:17.479047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.155 [2024-11-06 09:05:17.479108] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.155 [2024-11-06 09:05:17.479128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:41.155 { 00:12:41.155 "results": [ 00:12:41.155 { 00:12:41.155 "job": "raid_bdev1", 00:12:41.155 "core_mask": "0x1", 00:12:41.155 "workload": "randrw", 00:12:41.155 "percentage": 50, 00:12:41.155 "status": "finished", 00:12:41.155 "queue_depth": 1, 00:12:41.155 "io_size": 131072, 00:12:41.155 "runtime": 1.472735, 00:12:41.155 "iops": 11269.508771095954, 00:12:41.155 "mibps": 1408.6885963869943, 00:12:41.155 "io_failed": 1, 00:12:41.155 "io_timeout": 0, 00:12:41.155 "avg_latency_us": 124.06332942632739, 00:12:41.155 "min_latency_us": 36.53818181818182, 00:12:41.155 "max_latency_us": 1966.08 00:12:41.155 } 00:12:41.155 ], 00:12:41.155 "core_count": 1 00:12:41.155 } 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61290 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 61290 ']' 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 61290 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61290 00:12:41.155 killing process with pid 61290 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61290' 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 61290 00:12:41.155 [2024-11-06 09:05:17.534888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:41.155 09:05:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 61290 00:12:41.155 [2024-11-06 09:05:17.649506] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nm3vqGDNic 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.68 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:42.531 ************************************ 00:12:42.531 END TEST raid_read_error_test 00:12:42.531 ************************************ 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.68 != \0\.\0\0 ]] 00:12:42.531 00:12:42.531 real 0m4.587s 00:12:42.531 user 0m5.771s 00:12:42.531 sys 0m0.581s 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:42.531 09:05:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.531 09:05:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:12:42.531 09:05:18 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:42.531 09:05:18 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:42.531 09:05:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.531 ************************************ 00:12:42.531 START TEST raid_write_error_test 00:12:42.531 ************************************ 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.531 09:05:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3kUK1f9x1h 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61441 00:12:42.531 09:05:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61441 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 61441 ']' 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.531 09:05:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.531 [2024-11-06 09:05:18.906845] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:12:42.531 [2024-11-06 09:05:18.907223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61441 ] 00:12:42.790 [2024-11-06 09:05:19.093085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.790 [2024-11-06 09:05:19.227009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.049 [2024-11-06 09:05:19.427017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.049 [2024-11-06 09:05:19.427086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 BaseBdev1_malloc 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 true 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 [2024-11-06 09:05:19.972067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:43.618 [2024-11-06 09:05:19.972145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.618 [2024-11-06 09:05:19.972181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:43.618 [2024-11-06 09:05:19.972209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.618 [2024-11-06 09:05:19.975098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.618 [2024-11-06 09:05:19.975147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:43.618 BaseBdev1 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 BaseBdev2_malloc 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:43.618 09:05:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 true 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 [2024-11-06 09:05:20.031197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:43.618 [2024-11-06 09:05:20.031276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:43.618 [2024-11-06 09:05:20.031300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:43.618 [2024-11-06 09:05:20.031316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:43.618 [2024-11-06 09:05:20.034155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:43.618 [2024-11-06 09:05:20.034213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:43.618 BaseBdev2 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 [2024-11-06 09:05:20.039287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:12:43.618 [2024-11-06 09:05:20.041727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.618 [2024-11-06 09:05:20.041989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:43.618 [2024-11-06 09:05:20.042031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:43.618 [2024-11-06 09:05:20.042318] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:43.618 [2024-11-06 09:05:20.042526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:43.618 [2024-11-06 09:05:20.042544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:43.618 [2024-11-06 09:05:20.042729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.618 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.618 "name": "raid_bdev1", 00:12:43.619 "uuid": "ec387249-67ab-4529-98f9-ed56a18f2182", 00:12:43.619 "strip_size_kb": 64, 00:12:43.619 "state": "online", 00:12:43.619 "raid_level": "raid0", 00:12:43.619 "superblock": true, 00:12:43.619 "num_base_bdevs": 2, 00:12:43.619 "num_base_bdevs_discovered": 2, 00:12:43.619 "num_base_bdevs_operational": 2, 00:12:43.619 "base_bdevs_list": [ 00:12:43.619 { 00:12:43.619 "name": "BaseBdev1", 00:12:43.619 "uuid": "fc0fe735-52f4-51bf-98f5-7fea36eaf865", 00:12:43.619 "is_configured": true, 00:12:43.619 "data_offset": 2048, 00:12:43.619 "data_size": 63488 00:12:43.619 }, 00:12:43.619 { 00:12:43.619 "name": "BaseBdev2", 00:12:43.619 "uuid": "6440ccc0-06e6-5c73-896d-12ce856f24a4", 00:12:43.619 "is_configured": true, 00:12:43.619 "data_offset": 2048, 00:12:43.619 "data_size": 63488 00:12:43.619 } 00:12:43.619 ] 00:12:43.619 }' 00:12:43.619 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.619 09:05:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.185 09:05:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:44.185 09:05:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:44.185 [2024-11-06 09:05:20.680777] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.122 09:05:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.122 "name": "raid_bdev1", 00:12:45.122 "uuid": "ec387249-67ab-4529-98f9-ed56a18f2182", 00:12:45.122 "strip_size_kb": 64, 00:12:45.122 "state": "online", 00:12:45.122 "raid_level": "raid0", 00:12:45.122 "superblock": true, 00:12:45.122 "num_base_bdevs": 2, 00:12:45.122 "num_base_bdevs_discovered": 2, 00:12:45.122 "num_base_bdevs_operational": 2, 00:12:45.122 "base_bdevs_list": [ 00:12:45.122 { 00:12:45.122 "name": "BaseBdev1", 00:12:45.122 "uuid": "fc0fe735-52f4-51bf-98f5-7fea36eaf865", 00:12:45.122 "is_configured": true, 00:12:45.122 "data_offset": 2048, 00:12:45.122 "data_size": 63488 00:12:45.122 }, 00:12:45.122 { 00:12:45.122 "name": "BaseBdev2", 00:12:45.122 "uuid": "6440ccc0-06e6-5c73-896d-12ce856f24a4", 00:12:45.122 "is_configured": true, 00:12:45.122 "data_offset": 2048, 00:12:45.122 "data_size": 63488 00:12:45.122 } 00:12:45.122 ] 00:12:45.122 }' 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.122 09:05:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.690 [2024-11-06 09:05:22.083811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:45.690 [2024-11-06 09:05:22.083883] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.690 [2024-11-06 09:05:22.087490] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.690 [2024-11-06 09:05:22.087541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.690 [2024-11-06 09:05:22.087580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:45.690 [2024-11-06 09:05:22.087596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:45.690 { 00:12:45.690 "results": [ 00:12:45.690 { 00:12:45.690 "job": "raid_bdev1", 00:12:45.690 "core_mask": "0x1", 00:12:45.690 "workload": "randrw", 00:12:45.690 "percentage": 50, 00:12:45.690 "status": "finished", 00:12:45.690 "queue_depth": 1, 00:12:45.690 "io_size": 131072, 00:12:45.690 "runtime": 1.40055, 00:12:45.690 "iops": 10909.285637785157, 00:12:45.690 "mibps": 1363.6607047231446, 00:12:45.690 "io_failed": 1, 00:12:45.690 "io_timeout": 0, 00:12:45.690 "avg_latency_us": 128.02152879581152, 00:12:45.690 "min_latency_us": 37.46909090909091, 00:12:45.690 "max_latency_us": 2338.4436363636364 00:12:45.690 } 00:12:45.690 ], 00:12:45.690 "core_count": 1 00:12:45.690 } 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61441 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 61441 ']' 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 61441 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61441 00:12:45.690 killing process with pid 61441 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61441' 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 61441 00:12:45.690 [2024-11-06 09:05:22.124680] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:45.690 09:05:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 61441 00:12:45.949 [2024-11-06 09:05:22.242714] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3kUK1f9x1h 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:46.885 ************************************ 00:12:46.885 END TEST raid_write_error_test 00:12:46.885 ************************************ 00:12:46.885 
09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:46.885 00:12:46.885 real 0m4.537s 00:12:46.885 user 0m5.753s 00:12:46.885 sys 0m0.537s 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:46.885 09:05:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.885 09:05:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:46.885 09:05:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:12:46.885 09:05:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:46.885 09:05:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:46.885 09:05:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.885 ************************************ 00:12:46.885 START TEST raid_state_function_test 00:12:46.885 ************************************ 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:46.885 Process raid pid: 61579 00:12:46.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61579 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61579' 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61579 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 61579 ']' 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:46.885 09:05:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.144 [2024-11-06 09:05:23.490706] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:47.144 [2024-11-06 09:05:23.491179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.144 [2024-11-06 09:05:23.672742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.403 [2024-11-06 09:05:23.814147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.662 [2024-11-06 09:05:24.024688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.662 [2024-11-06 09:05:24.024877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.921 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:47.921 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:12:47.921 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:47.921 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.921 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.921 [2024-11-06 09:05:24.448407] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.921 [2024-11-06 09:05:24.448671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.921 [2024-11-06 09:05:24.448700] bdev.c:8277:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.921 [2024-11-06 09:05:24.448719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.179 "name": "Existed_Raid", 00:12:48.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.179 "strip_size_kb": 64, 00:12:48.179 "state": "configuring", 00:12:48.179 "raid_level": "concat", 00:12:48.179 "superblock": false, 00:12:48.179 "num_base_bdevs": 2, 00:12:48.179 "num_base_bdevs_discovered": 0, 00:12:48.179 "num_base_bdevs_operational": 2, 00:12:48.179 "base_bdevs_list": [ 00:12:48.179 { 00:12:48.179 "name": "BaseBdev1", 00:12:48.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.179 "is_configured": false, 00:12:48.179 "data_offset": 0, 00:12:48.179 "data_size": 0 00:12:48.179 }, 00:12:48.179 { 00:12:48.179 "name": "BaseBdev2", 00:12:48.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.179 "is_configured": false, 00:12:48.179 "data_offset": 0, 00:12:48.179 "data_size": 0 00:12:48.179 } 00:12:48.179 ] 00:12:48.179 }' 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.179 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.437 [2024-11-06 09:05:24.948504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.437 [2024-11-06 09:05:24.948542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.437 09:05:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.437 [2024-11-06 09:05:24.956488] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.437 [2024-11-06 09:05:24.956551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.437 [2024-11-06 09:05:24.956566] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.437 [2024-11-06 09:05:24.956584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.437 09:05:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.696 [2024-11-06 09:05:25.003169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.696 BaseBdev1 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:48.696 09:05:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.696 [ 00:12:48.696 { 00:12:48.696 "name": "BaseBdev1", 00:12:48.696 "aliases": [ 00:12:48.696 "e841497c-40d3-4d59-b97d-265a6e7a6b10" 00:12:48.696 ], 00:12:48.696 "product_name": "Malloc disk", 00:12:48.696 "block_size": 512, 00:12:48.696 "num_blocks": 65536, 00:12:48.696 "uuid": "e841497c-40d3-4d59-b97d-265a6e7a6b10", 00:12:48.696 "assigned_rate_limits": { 00:12:48.696 "rw_ios_per_sec": 0, 00:12:48.696 "rw_mbytes_per_sec": 0, 00:12:48.696 "r_mbytes_per_sec": 0, 00:12:48.696 "w_mbytes_per_sec": 0 00:12:48.696 }, 00:12:48.696 "claimed": true, 00:12:48.696 "claim_type": "exclusive_write", 00:12:48.696 "zoned": false, 00:12:48.696 "supported_io_types": { 00:12:48.696 "read": true, 00:12:48.696 "write": true, 00:12:48.696 "unmap": true, 00:12:48.696 "flush": true, 00:12:48.696 "reset": true, 00:12:48.696 "nvme_admin": false, 00:12:48.696 "nvme_io": false, 00:12:48.696 "nvme_io_md": 
false, 00:12:48.696 "write_zeroes": true, 00:12:48.696 "zcopy": true, 00:12:48.696 "get_zone_info": false, 00:12:48.696 "zone_management": false, 00:12:48.696 "zone_append": false, 00:12:48.696 "compare": false, 00:12:48.696 "compare_and_write": false, 00:12:48.696 "abort": true, 00:12:48.696 "seek_hole": false, 00:12:48.696 "seek_data": false, 00:12:48.696 "copy": true, 00:12:48.696 "nvme_iov_md": false 00:12:48.696 }, 00:12:48.696 "memory_domains": [ 00:12:48.696 { 00:12:48.696 "dma_device_id": "system", 00:12:48.696 "dma_device_type": 1 00:12:48.696 }, 00:12:48.696 { 00:12:48.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.696 "dma_device_type": 2 00:12:48.696 } 00:12:48.696 ], 00:12:48.696 "driver_specific": {} 00:12:48.696 } 00:12:48.696 ] 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 
-- # local num_base_bdevs_discovered 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.696 "name": "Existed_Raid", 00:12:48.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.696 "strip_size_kb": 64, 00:12:48.696 "state": "configuring", 00:12:48.696 "raid_level": "concat", 00:12:48.696 "superblock": false, 00:12:48.696 "num_base_bdevs": 2, 00:12:48.696 "num_base_bdevs_discovered": 1, 00:12:48.696 "num_base_bdevs_operational": 2, 00:12:48.696 "base_bdevs_list": [ 00:12:48.696 { 00:12:48.696 "name": "BaseBdev1", 00:12:48.696 "uuid": "e841497c-40d3-4d59-b97d-265a6e7a6b10", 00:12:48.696 "is_configured": true, 00:12:48.696 "data_offset": 0, 00:12:48.696 "data_size": 65536 00:12:48.696 }, 00:12:48.696 { 00:12:48.696 "name": "BaseBdev2", 00:12:48.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.696 "is_configured": false, 00:12:48.696 "data_offset": 0, 00:12:48.696 "data_size": 0 00:12:48.696 } 00:12:48.696 ] 00:12:48.696 }' 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.696 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.264 [2024-11-06 09:05:25.539490] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.264 [2024-11-06 09:05:25.539705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.264 [2024-11-06 09:05:25.547556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.264 [2024-11-06 09:05:25.550341] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.264 [2024-11-06 09:05:25.550552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.264 
09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.264 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.265 "name": "Existed_Raid", 00:12:49.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.265 "strip_size_kb": 64, 00:12:49.265 "state": "configuring", 00:12:49.265 "raid_level": "concat", 00:12:49.265 "superblock": false, 00:12:49.265 "num_base_bdevs": 2, 00:12:49.265 "num_base_bdevs_discovered": 1, 00:12:49.265 "num_base_bdevs_operational": 2, 00:12:49.265 "base_bdevs_list": [ 00:12:49.265 { 00:12:49.265 "name": 
"BaseBdev1", 00:12:49.265 "uuid": "e841497c-40d3-4d59-b97d-265a6e7a6b10", 00:12:49.265 "is_configured": true, 00:12:49.265 "data_offset": 0, 00:12:49.265 "data_size": 65536 00:12:49.265 }, 00:12:49.265 { 00:12:49.265 "name": "BaseBdev2", 00:12:49.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.265 "is_configured": false, 00:12:49.265 "data_offset": 0, 00:12:49.265 "data_size": 0 00:12:49.265 } 00:12:49.265 ] 00:12:49.265 }' 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.265 09:05:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 [2024-11-06 09:05:26.122470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.833 [2024-11-06 09:05:26.122704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:49.833 [2024-11-06 09:05:26.122727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:49.833 [2024-11-06 09:05:26.123127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:49.833 [2024-11-06 09:05:26.123395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:49.833 [2024-11-06 09:05:26.123417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:49.833 [2024-11-06 09:05:26.123707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.833 BaseBdev2 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 [ 00:12:49.833 { 00:12:49.833 "name": "BaseBdev2", 00:12:49.833 "aliases": [ 00:12:49.833 "ef5fbee1-dd84-4d68-ad5a-79cc12e2d49c" 00:12:49.833 ], 00:12:49.833 "product_name": "Malloc disk", 00:12:49.833 "block_size": 512, 00:12:49.833 "num_blocks": 65536, 00:12:49.833 "uuid": "ef5fbee1-dd84-4d68-ad5a-79cc12e2d49c", 00:12:49.833 "assigned_rate_limits": { 00:12:49.833 "rw_ios_per_sec": 0, 00:12:49.833 "rw_mbytes_per_sec": 0, 00:12:49.833 "r_mbytes_per_sec": 0, 00:12:49.833 
"w_mbytes_per_sec": 0 00:12:49.833 }, 00:12:49.833 "claimed": true, 00:12:49.833 "claim_type": "exclusive_write", 00:12:49.833 "zoned": false, 00:12:49.833 "supported_io_types": { 00:12:49.833 "read": true, 00:12:49.833 "write": true, 00:12:49.833 "unmap": true, 00:12:49.833 "flush": true, 00:12:49.833 "reset": true, 00:12:49.833 "nvme_admin": false, 00:12:49.833 "nvme_io": false, 00:12:49.833 "nvme_io_md": false, 00:12:49.833 "write_zeroes": true, 00:12:49.833 "zcopy": true, 00:12:49.833 "get_zone_info": false, 00:12:49.833 "zone_management": false, 00:12:49.833 "zone_append": false, 00:12:49.833 "compare": false, 00:12:49.833 "compare_and_write": false, 00:12:49.833 "abort": true, 00:12:49.833 "seek_hole": false, 00:12:49.833 "seek_data": false, 00:12:49.833 "copy": true, 00:12:49.833 "nvme_iov_md": false 00:12:49.833 }, 00:12:49.833 "memory_domains": [ 00:12:49.833 { 00:12:49.833 "dma_device_id": "system", 00:12:49.833 "dma_device_type": 1 00:12:49.833 }, 00:12:49.833 { 00:12:49.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.833 "dma_device_type": 2 00:12:49.833 } 00:12:49.833 ], 00:12:49.833 "driver_specific": {} 00:12:49.833 } 00:12:49.833 ] 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.833 09:05:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.833 "name": "Existed_Raid", 00:12:49.833 "uuid": "7f462212-4fa8-4ae6-8b81-b354c140b3dc", 00:12:49.833 "strip_size_kb": 64, 00:12:49.833 "state": "online", 00:12:49.833 "raid_level": "concat", 00:12:49.833 "superblock": false, 00:12:49.833 "num_base_bdevs": 2, 00:12:49.833 "num_base_bdevs_discovered": 2, 00:12:49.833 "num_base_bdevs_operational": 2, 00:12:49.833 "base_bdevs_list": [ 00:12:49.833 { 00:12:49.833 "name": "BaseBdev1", 00:12:49.833 "uuid": "e841497c-40d3-4d59-b97d-265a6e7a6b10", 00:12:49.833 "is_configured": true, 00:12:49.833 "data_offset": 0, 
00:12:49.833 "data_size": 65536 00:12:49.833 }, 00:12:49.833 { 00:12:49.833 "name": "BaseBdev2", 00:12:49.833 "uuid": "ef5fbee1-dd84-4d68-ad5a-79cc12e2d49c", 00:12:49.833 "is_configured": true, 00:12:49.833 "data_offset": 0, 00:12:49.833 "data_size": 65536 00:12:49.833 } 00:12:49.833 ] 00:12:49.833 }' 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.833 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.400 [2024-11-06 09:05:26.687016] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.400 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.400 "name": "Existed_Raid", 
00:12:50.400 "aliases": [ 00:12:50.400 "7f462212-4fa8-4ae6-8b81-b354c140b3dc" 00:12:50.400 ], 00:12:50.400 "product_name": "Raid Volume", 00:12:50.400 "block_size": 512, 00:12:50.400 "num_blocks": 131072, 00:12:50.400 "uuid": "7f462212-4fa8-4ae6-8b81-b354c140b3dc", 00:12:50.400 "assigned_rate_limits": { 00:12:50.400 "rw_ios_per_sec": 0, 00:12:50.400 "rw_mbytes_per_sec": 0, 00:12:50.400 "r_mbytes_per_sec": 0, 00:12:50.400 "w_mbytes_per_sec": 0 00:12:50.400 }, 00:12:50.400 "claimed": false, 00:12:50.400 "zoned": false, 00:12:50.400 "supported_io_types": { 00:12:50.400 "read": true, 00:12:50.400 "write": true, 00:12:50.400 "unmap": true, 00:12:50.400 "flush": true, 00:12:50.400 "reset": true, 00:12:50.400 "nvme_admin": false, 00:12:50.400 "nvme_io": false, 00:12:50.400 "nvme_io_md": false, 00:12:50.400 "write_zeroes": true, 00:12:50.400 "zcopy": false, 00:12:50.400 "get_zone_info": false, 00:12:50.400 "zone_management": false, 00:12:50.400 "zone_append": false, 00:12:50.400 "compare": false, 00:12:50.400 "compare_and_write": false, 00:12:50.400 "abort": false, 00:12:50.400 "seek_hole": false, 00:12:50.400 "seek_data": false, 00:12:50.400 "copy": false, 00:12:50.400 "nvme_iov_md": false 00:12:50.400 }, 00:12:50.400 "memory_domains": [ 00:12:50.400 { 00:12:50.400 "dma_device_id": "system", 00:12:50.401 "dma_device_type": 1 00:12:50.401 }, 00:12:50.401 { 00:12:50.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.401 "dma_device_type": 2 00:12:50.401 }, 00:12:50.401 { 00:12:50.401 "dma_device_id": "system", 00:12:50.401 "dma_device_type": 1 00:12:50.401 }, 00:12:50.401 { 00:12:50.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.401 "dma_device_type": 2 00:12:50.401 } 00:12:50.401 ], 00:12:50.401 "driver_specific": { 00:12:50.401 "raid": { 00:12:50.401 "uuid": "7f462212-4fa8-4ae6-8b81-b354c140b3dc", 00:12:50.401 "strip_size_kb": 64, 00:12:50.401 "state": "online", 00:12:50.401 "raid_level": "concat", 00:12:50.401 "superblock": false, 00:12:50.401 
"num_base_bdevs": 2, 00:12:50.401 "num_base_bdevs_discovered": 2, 00:12:50.401 "num_base_bdevs_operational": 2, 00:12:50.401 "base_bdevs_list": [ 00:12:50.401 { 00:12:50.401 "name": "BaseBdev1", 00:12:50.401 "uuid": "e841497c-40d3-4d59-b97d-265a6e7a6b10", 00:12:50.401 "is_configured": true, 00:12:50.401 "data_offset": 0, 00:12:50.401 "data_size": 65536 00:12:50.401 }, 00:12:50.401 { 00:12:50.401 "name": "BaseBdev2", 00:12:50.401 "uuid": "ef5fbee1-dd84-4d68-ad5a-79cc12e2d49c", 00:12:50.401 "is_configured": true, 00:12:50.401 "data_offset": 0, 00:12:50.401 "data_size": 65536 00:12:50.401 } 00:12:50.401 ] 00:12:50.401 } 00:12:50.401 } 00:12:50.401 }' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:50.401 BaseBdev2' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.401 09:05:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.401 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.711 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.711 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.711 09:05:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:50.711 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.711 09:05:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.711 [2024-11-06 09:05:26.954841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.711 [2024-11-06 09:05:26.955014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.711 [2024-11-06 09:05:26.955130] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@260 -- # local expected_state 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.711 09:05:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.711 "name": "Existed_Raid", 00:12:50.711 "uuid": "7f462212-4fa8-4ae6-8b81-b354c140b3dc", 00:12:50.711 "strip_size_kb": 64, 00:12:50.711 "state": "offline", 00:12:50.711 "raid_level": "concat", 00:12:50.711 "superblock": false, 00:12:50.711 "num_base_bdevs": 2, 00:12:50.711 "num_base_bdevs_discovered": 1, 00:12:50.711 "num_base_bdevs_operational": 1, 00:12:50.711 "base_bdevs_list": [ 00:12:50.711 { 00:12:50.711 "name": null, 00:12:50.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.711 "is_configured": false, 00:12:50.711 "data_offset": 0, 00:12:50.711 "data_size": 65536 00:12:50.711 }, 00:12:50.711 { 00:12:50.711 "name": "BaseBdev2", 00:12:50.711 "uuid": "ef5fbee1-dd84-4d68-ad5a-79cc12e2d49c", 00:12:50.711 "is_configured": true, 00:12:50.711 "data_offset": 0, 00:12:50.711 "data_size": 65536 00:12:50.711 } 00:12:50.711 ] 00:12:50.711 }' 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.711 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.278 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.278 [2024-11-06 09:05:27.606466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.279 [2024-11-06 09:05:27.606689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:51.279 09:05:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61579 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 61579 ']' 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 61579 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61579 00:12:51.279 killing process with pid 61579 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61579' 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 61579 00:12:51.279 09:05:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 61579 00:12:51.279 [2024-11-06 09:05:27.780694] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.279 [2024-11-06 09:05:27.796138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.656 ************************************ 00:12:52.656 END TEST raid_state_function_test 00:12:52.656 ************************************ 00:12:52.656 09:05:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:52.656 00:12:52.656 real 0m5.390s 00:12:52.656 user 0m8.235s 
00:12:52.656 sys 0m0.715s 00:12:52.656 09:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:52.656 09:05:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.657 09:05:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:12:52.657 09:05:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:52.657 09:05:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:52.657 09:05:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.657 ************************************ 00:12:52.657 START TEST raid_state_function_test_sb 00:12:52.657 ************************************ 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:52.657 Process raid pid: 61838 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61838 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61838' 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 61838 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 61838 ']' 00:12:52.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:52.657 09:05:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.657 [2024-11-06 09:05:28.932572] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:12:52.657 [2024-11-06 09:05:28.932755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.657 [2024-11-06 09:05:29.119321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.915 [2024-11-06 09:05:29.245510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.173 [2024-11-06 09:05:29.453470] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.173 [2024-11-06 09:05:29.453534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.431 [2024-11-06 09:05:29.914600] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.431 [2024-11-06 09:05:29.914799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.431 [2024-11-06 09:05:29.914850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.431 [2024-11-06 09:05:29.914874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.431 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.690 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.690 "name": "Existed_Raid", 00:12:53.690 "uuid": "3032f606-ac90-4897-a183-fd03b6fe11f8", 00:12:53.690 
"strip_size_kb": 64, 00:12:53.690 "state": "configuring", 00:12:53.690 "raid_level": "concat", 00:12:53.690 "superblock": true, 00:12:53.690 "num_base_bdevs": 2, 00:12:53.690 "num_base_bdevs_discovered": 0, 00:12:53.690 "num_base_bdevs_operational": 2, 00:12:53.690 "base_bdevs_list": [ 00:12:53.690 { 00:12:53.690 "name": "BaseBdev1", 00:12:53.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.690 "is_configured": false, 00:12:53.690 "data_offset": 0, 00:12:53.690 "data_size": 0 00:12:53.690 }, 00:12:53.690 { 00:12:53.690 "name": "BaseBdev2", 00:12:53.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.690 "is_configured": false, 00:12:53.690 "data_offset": 0, 00:12:53.690 "data_size": 0 00:12:53.690 } 00:12:53.690 ] 00:12:53.690 }' 00:12:53.690 09:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.690 09:05:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.950 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:53.950 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.950 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.950 [2024-11-06 09:05:30.470685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:53.950 [2024-11-06 09:05:30.470911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:53.950 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.950 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:53.950 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:53.950 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.950 [2024-11-06 09:05:30.478681] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:53.950 [2024-11-06 09:05:30.478903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:53.950 [2024-11-06 09:05:30.478932] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.950 [2024-11-06 09:05:30.478955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.209 [2024-11-06 09:05:30.522733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.209 BaseBdev1 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.209 [ 00:12:54.209 { 00:12:54.209 "name": "BaseBdev1", 00:12:54.209 "aliases": [ 00:12:54.209 "0bdf706a-247c-49cd-8aad-0940111fa855" 00:12:54.209 ], 00:12:54.209 "product_name": "Malloc disk", 00:12:54.209 "block_size": 512, 00:12:54.209 "num_blocks": 65536, 00:12:54.209 "uuid": "0bdf706a-247c-49cd-8aad-0940111fa855", 00:12:54.209 "assigned_rate_limits": { 00:12:54.209 "rw_ios_per_sec": 0, 00:12:54.209 "rw_mbytes_per_sec": 0, 00:12:54.209 "r_mbytes_per_sec": 0, 00:12:54.209 "w_mbytes_per_sec": 0 00:12:54.209 }, 00:12:54.209 "claimed": true, 00:12:54.209 "claim_type": "exclusive_write", 00:12:54.209 "zoned": false, 00:12:54.209 "supported_io_types": { 00:12:54.209 "read": true, 00:12:54.209 "write": true, 00:12:54.209 "unmap": true, 00:12:54.209 "flush": true, 00:12:54.209 "reset": true, 00:12:54.209 "nvme_admin": false, 00:12:54.209 "nvme_io": false, 00:12:54.209 "nvme_io_md": false, 00:12:54.209 "write_zeroes": true, 00:12:54.209 "zcopy": true, 00:12:54.209 "get_zone_info": false, 00:12:54.209 "zone_management": false, 00:12:54.209 "zone_append": false, 00:12:54.209 "compare": false, 00:12:54.209 
"compare_and_write": false, 00:12:54.209 "abort": true, 00:12:54.209 "seek_hole": false, 00:12:54.209 "seek_data": false, 00:12:54.209 "copy": true, 00:12:54.209 "nvme_iov_md": false 00:12:54.209 }, 00:12:54.209 "memory_domains": [ 00:12:54.209 { 00:12:54.209 "dma_device_id": "system", 00:12:54.209 "dma_device_type": 1 00:12:54.209 }, 00:12:54.209 { 00:12:54.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.209 "dma_device_type": 2 00:12:54.209 } 00:12:54.209 ], 00:12:54.209 "driver_specific": {} 00:12:54.209 } 00:12:54.209 ] 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.209 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.210 09:05:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.210 "name": "Existed_Raid", 00:12:54.210 "uuid": "ed7c0ba2-1684-4880-a6ca-66c170e6ea2d", 00:12:54.210 "strip_size_kb": 64, 00:12:54.210 "state": "configuring", 00:12:54.210 "raid_level": "concat", 00:12:54.210 "superblock": true, 00:12:54.210 "num_base_bdevs": 2, 00:12:54.210 "num_base_bdevs_discovered": 1, 00:12:54.210 "num_base_bdevs_operational": 2, 00:12:54.210 "base_bdevs_list": [ 00:12:54.210 { 00:12:54.210 "name": "BaseBdev1", 00:12:54.210 "uuid": "0bdf706a-247c-49cd-8aad-0940111fa855", 00:12:54.210 "is_configured": true, 00:12:54.210 "data_offset": 2048, 00:12:54.210 "data_size": 63488 00:12:54.210 }, 00:12:54.210 { 00:12:54.210 "name": "BaseBdev2", 00:12:54.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.210 "is_configured": false, 00:12:54.210 "data_offset": 0, 00:12:54.210 "data_size": 0 00:12:54.210 } 00:12:54.210 ] 00:12:54.210 }' 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.210 09:05:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.776 [2024-11-06 09:05:31.058979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:54.776 [2024-11-06 09:05:31.059063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.776 [2024-11-06 09:05:31.067071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:54.776 [2024-11-06 09:05:31.069574] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:54.776 [2024-11-06 09:05:31.069639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.776 "name": "Existed_Raid", 00:12:54.776 "uuid": "cd3c02a1-f5ef-4f8b-9747-5f9c1a312d16", 00:12:54.776 "strip_size_kb": 64, 00:12:54.776 "state": "configuring", 00:12:54.776 "raid_level": "concat", 00:12:54.776 "superblock": true, 00:12:54.776 "num_base_bdevs": 2, 00:12:54.776 "num_base_bdevs_discovered": 1, 00:12:54.776 "num_base_bdevs_operational": 2, 00:12:54.776 "base_bdevs_list": [ 00:12:54.776 { 00:12:54.776 "name": "BaseBdev1", 00:12:54.776 "uuid": 
"0bdf706a-247c-49cd-8aad-0940111fa855", 00:12:54.776 "is_configured": true, 00:12:54.776 "data_offset": 2048, 00:12:54.776 "data_size": 63488 00:12:54.776 }, 00:12:54.776 { 00:12:54.776 "name": "BaseBdev2", 00:12:54.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.776 "is_configured": false, 00:12:54.776 "data_offset": 0, 00:12:54.776 "data_size": 0 00:12:54.776 } 00:12:54.776 ] 00:12:54.776 }' 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.776 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.343 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.343 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.343 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.343 [2024-11-06 09:05:31.620072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.343 [2024-11-06 09:05:31.620409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:55.343 [2024-11-06 09:05:31.620429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:55.343 [2024-11-06 09:05:31.620746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:55.343 BaseBdev2 00:12:55.343 [2024-11-06 09:05:31.620944] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:55.344 [2024-11-06 09:05:31.620965] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:55.344 [2024-11-06 09:05:31.621189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.344 [ 00:12:55.344 { 00:12:55.344 "name": "BaseBdev2", 00:12:55.344 "aliases": [ 00:12:55.344 "34ccbdd2-60eb-48e7-b528-11b7d51c8eec" 00:12:55.344 ], 00:12:55.344 "product_name": "Malloc disk", 00:12:55.344 "block_size": 512, 00:12:55.344 "num_blocks": 65536, 00:12:55.344 "uuid": "34ccbdd2-60eb-48e7-b528-11b7d51c8eec", 00:12:55.344 "assigned_rate_limits": { 00:12:55.344 "rw_ios_per_sec": 0, 00:12:55.344 "rw_mbytes_per_sec": 0, 00:12:55.344 "r_mbytes_per_sec": 0, 
00:12:55.344 "w_mbytes_per_sec": 0 00:12:55.344 }, 00:12:55.344 "claimed": true, 00:12:55.344 "claim_type": "exclusive_write", 00:12:55.344 "zoned": false, 00:12:55.344 "supported_io_types": { 00:12:55.344 "read": true, 00:12:55.344 "write": true, 00:12:55.344 "unmap": true, 00:12:55.344 "flush": true, 00:12:55.344 "reset": true, 00:12:55.344 "nvme_admin": false, 00:12:55.344 "nvme_io": false, 00:12:55.344 "nvme_io_md": false, 00:12:55.344 "write_zeroes": true, 00:12:55.344 "zcopy": true, 00:12:55.344 "get_zone_info": false, 00:12:55.344 "zone_management": false, 00:12:55.344 "zone_append": false, 00:12:55.344 "compare": false, 00:12:55.344 "compare_and_write": false, 00:12:55.344 "abort": true, 00:12:55.344 "seek_hole": false, 00:12:55.344 "seek_data": false, 00:12:55.344 "copy": true, 00:12:55.344 "nvme_iov_md": false 00:12:55.344 }, 00:12:55.344 "memory_domains": [ 00:12:55.344 { 00:12:55.344 "dma_device_id": "system", 00:12:55.344 "dma_device_type": 1 00:12:55.344 }, 00:12:55.344 { 00:12:55.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.344 "dma_device_type": 2 00:12:55.344 } 00:12:55.344 ], 00:12:55.344 "driver_specific": {} 00:12:55.344 } 00:12:55.344 ] 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.344 "name": "Existed_Raid", 00:12:55.344 "uuid": "cd3c02a1-f5ef-4f8b-9747-5f9c1a312d16", 00:12:55.344 "strip_size_kb": 64, 00:12:55.344 "state": "online", 00:12:55.344 "raid_level": "concat", 00:12:55.344 "superblock": true, 00:12:55.344 "num_base_bdevs": 2, 00:12:55.344 "num_base_bdevs_discovered": 2, 00:12:55.344 "num_base_bdevs_operational": 2, 00:12:55.344 "base_bdevs_list": [ 00:12:55.344 { 00:12:55.344 "name": "BaseBdev1", 00:12:55.344 "uuid": 
"0bdf706a-247c-49cd-8aad-0940111fa855", 00:12:55.344 "is_configured": true, 00:12:55.344 "data_offset": 2048, 00:12:55.344 "data_size": 63488 00:12:55.344 }, 00:12:55.344 { 00:12:55.344 "name": "BaseBdev2", 00:12:55.344 "uuid": "34ccbdd2-60eb-48e7-b528-11b7d51c8eec", 00:12:55.344 "is_configured": true, 00:12:55.344 "data_offset": 2048, 00:12:55.344 "data_size": 63488 00:12:55.344 } 00:12:55.344 ] 00:12:55.344 }' 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.344 09:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.912 [2024-11-06 09:05:32.160722] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.912 "name": "Existed_Raid", 00:12:55.912 "aliases": [ 00:12:55.912 "cd3c02a1-f5ef-4f8b-9747-5f9c1a312d16" 00:12:55.912 ], 00:12:55.912 "product_name": "Raid Volume", 00:12:55.912 "block_size": 512, 00:12:55.912 "num_blocks": 126976, 00:12:55.912 "uuid": "cd3c02a1-f5ef-4f8b-9747-5f9c1a312d16", 00:12:55.912 "assigned_rate_limits": { 00:12:55.912 "rw_ios_per_sec": 0, 00:12:55.912 "rw_mbytes_per_sec": 0, 00:12:55.912 "r_mbytes_per_sec": 0, 00:12:55.912 "w_mbytes_per_sec": 0 00:12:55.912 }, 00:12:55.912 "claimed": false, 00:12:55.912 "zoned": false, 00:12:55.912 "supported_io_types": { 00:12:55.912 "read": true, 00:12:55.912 "write": true, 00:12:55.912 "unmap": true, 00:12:55.912 "flush": true, 00:12:55.912 "reset": true, 00:12:55.912 "nvme_admin": false, 00:12:55.912 "nvme_io": false, 00:12:55.912 "nvme_io_md": false, 00:12:55.912 "write_zeroes": true, 00:12:55.912 "zcopy": false, 00:12:55.912 "get_zone_info": false, 00:12:55.912 "zone_management": false, 00:12:55.912 "zone_append": false, 00:12:55.912 "compare": false, 00:12:55.912 "compare_and_write": false, 00:12:55.912 "abort": false, 00:12:55.912 "seek_hole": false, 00:12:55.912 "seek_data": false, 00:12:55.912 "copy": false, 00:12:55.912 "nvme_iov_md": false 00:12:55.912 }, 00:12:55.912 "memory_domains": [ 00:12:55.912 { 00:12:55.912 "dma_device_id": "system", 00:12:55.912 "dma_device_type": 1 00:12:55.912 }, 00:12:55.912 { 00:12:55.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.912 "dma_device_type": 2 00:12:55.912 }, 00:12:55.912 { 00:12:55.912 "dma_device_id": "system", 00:12:55.912 "dma_device_type": 1 00:12:55.912 }, 00:12:55.912 { 00:12:55.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.912 "dma_device_type": 2 00:12:55.912 } 00:12:55.912 ], 00:12:55.912 "driver_specific": { 00:12:55.912 "raid": { 00:12:55.912 "uuid": "cd3c02a1-f5ef-4f8b-9747-5f9c1a312d16", 00:12:55.912 
"strip_size_kb": 64, 00:12:55.912 "state": "online", 00:12:55.912 "raid_level": "concat", 00:12:55.912 "superblock": true, 00:12:55.912 "num_base_bdevs": 2, 00:12:55.912 "num_base_bdevs_discovered": 2, 00:12:55.912 "num_base_bdevs_operational": 2, 00:12:55.912 "base_bdevs_list": [ 00:12:55.912 { 00:12:55.912 "name": "BaseBdev1", 00:12:55.912 "uuid": "0bdf706a-247c-49cd-8aad-0940111fa855", 00:12:55.912 "is_configured": true, 00:12:55.912 "data_offset": 2048, 00:12:55.912 "data_size": 63488 00:12:55.912 }, 00:12:55.912 { 00:12:55.912 "name": "BaseBdev2", 00:12:55.912 "uuid": "34ccbdd2-60eb-48e7-b528-11b7d51c8eec", 00:12:55.912 "is_configured": true, 00:12:55.912 "data_offset": 2048, 00:12:55.912 "data_size": 63488 00:12:55.912 } 00:12:55.912 ] 00:12:55.912 } 00:12:55.912 } 00:12:55.912 }' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:55.912 BaseBdev2' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.912 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.912 [2024-11-06 09:05:32.428455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.912 [2024-11-06 09:05:32.428511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.912 [2024-11-06 09:05:32.428573] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.173 "name": "Existed_Raid", 00:12:56.173 "uuid": "cd3c02a1-f5ef-4f8b-9747-5f9c1a312d16", 00:12:56.173 "strip_size_kb": 64, 00:12:56.173 "state": "offline", 00:12:56.173 "raid_level": "concat", 00:12:56.173 "superblock": true, 00:12:56.173 "num_base_bdevs": 2, 00:12:56.173 "num_base_bdevs_discovered": 1, 00:12:56.173 "num_base_bdevs_operational": 1, 00:12:56.173 "base_bdevs_list": [ 00:12:56.173 { 00:12:56.173 "name": null, 00:12:56.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.173 "is_configured": false, 00:12:56.173 "data_offset": 0, 00:12:56.173 "data_size": 63488 00:12:56.173 }, 00:12:56.173 { 00:12:56.173 "name": "BaseBdev2", 00:12:56.173 "uuid": "34ccbdd2-60eb-48e7-b528-11b7d51c8eec", 00:12:56.173 "is_configured": true, 00:12:56.173 "data_offset": 2048, 00:12:56.173 "data_size": 63488 00:12:56.173 } 00:12:56.173 ] 00:12:56.173 }' 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.173 09:05:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 [2024-11-06 09:05:33.079309] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:56.740 [2024-11-06 09:05:33.079388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.740 09:05:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61838 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 61838 ']' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 61838 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61838 00:12:56.740 killing process with pid 61838 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61838' 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 61838 00:12:56.740 [2024-11-06 09:05:33.250706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.740 09:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 61838 00:12:56.740 [2024-11-06 09:05:33.265279] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:58.116 ************************************ 00:12:58.116 END TEST raid_state_function_test_sb 00:12:58.116 ************************************ 00:12:58.116 09:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:58.116 00:12:58.116 real 0m5.462s 00:12:58.116 user 0m8.319s 00:12:58.116 sys 0m0.741s 00:12:58.116 09:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:58.116 09:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.116 09:05:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:12:58.116 09:05:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:58.116 09:05:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:58.116 09:05:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:58.116 ************************************ 00:12:58.116 START TEST raid_superblock_test 00:12:58.116 ************************************ 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:58.116 
09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:58.116 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62090 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62090 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 62090 ']' 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:58.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:58.117 09:05:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.117 [2024-11-06 09:05:34.446959] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:12:58.117 [2024-11-06 09:05:34.447137] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62090 ] 00:12:58.117 [2024-11-06 09:05:34.638624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.375 [2024-11-06 09:05:34.767326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.633 [2024-11-06 09:05:34.971774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.633 [2024-11-06 09:05:34.971826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.892 09:05:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.892 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.151 malloc1 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.151 [2024-11-06 09:05:35.475699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:59.151 [2024-11-06 09:05:35.475940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.151 [2024-11-06 09:05:35.476025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:59.151 [2024-11-06 09:05:35.476279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.151 [2024-11-06 09:05:35.479156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.151 [2024-11-06 09:05:35.479366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:59.151 pt1 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.151 09:05:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.151 malloc2 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.151 [2024-11-06 09:05:35.532496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.151 [2024-11-06 09:05:35.532577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.151 [2024-11-06 09:05:35.532608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:59.151 
[2024-11-06 09:05:35.532622] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.151 [2024-11-06 09:05:35.535500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.151 [2024-11-06 09:05:35.535543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.151 pt2 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.151 [2024-11-06 09:05:35.544575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:59.151 [2024-11-06 09:05:35.547084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.151 [2024-11-06 09:05:35.547292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.151 [2024-11-06 09:05:35.547311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:59.151 [2024-11-06 09:05:35.547587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:12:59.151 [2024-11-06 09:05:35.547784] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.151 [2024-11-06 09:05:35.547805] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.151 [2024-11-06 09:05:35.548013] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.151 "name": "raid_bdev1", 00:12:59.151 "uuid": 
"74ee8a21-66b8-4b34-99f0-4c047cabee25", 00:12:59.151 "strip_size_kb": 64, 00:12:59.151 "state": "online", 00:12:59.151 "raid_level": "concat", 00:12:59.151 "superblock": true, 00:12:59.151 "num_base_bdevs": 2, 00:12:59.151 "num_base_bdevs_discovered": 2, 00:12:59.151 "num_base_bdevs_operational": 2, 00:12:59.151 "base_bdevs_list": [ 00:12:59.151 { 00:12:59.151 "name": "pt1", 00:12:59.151 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.151 "is_configured": true, 00:12:59.151 "data_offset": 2048, 00:12:59.151 "data_size": 63488 00:12:59.151 }, 00:12:59.151 { 00:12:59.151 "name": "pt2", 00:12:59.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.151 "is_configured": true, 00:12:59.151 "data_offset": 2048, 00:12:59.151 "data_size": 63488 00:12:59.151 } 00:12:59.151 ] 00:12:59.151 }' 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.151 09:05:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.719 
09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.719 [2024-11-06 09:05:36.061036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.719 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.719 "name": "raid_bdev1", 00:12:59.719 "aliases": [ 00:12:59.719 "74ee8a21-66b8-4b34-99f0-4c047cabee25" 00:12:59.719 ], 00:12:59.719 "product_name": "Raid Volume", 00:12:59.719 "block_size": 512, 00:12:59.719 "num_blocks": 126976, 00:12:59.719 "uuid": "74ee8a21-66b8-4b34-99f0-4c047cabee25", 00:12:59.719 "assigned_rate_limits": { 00:12:59.719 "rw_ios_per_sec": 0, 00:12:59.719 "rw_mbytes_per_sec": 0, 00:12:59.719 "r_mbytes_per_sec": 0, 00:12:59.719 "w_mbytes_per_sec": 0 00:12:59.719 }, 00:12:59.719 "claimed": false, 00:12:59.719 "zoned": false, 00:12:59.719 "supported_io_types": { 00:12:59.719 "read": true, 00:12:59.719 "write": true, 00:12:59.719 "unmap": true, 00:12:59.719 "flush": true, 00:12:59.719 "reset": true, 00:12:59.719 "nvme_admin": false, 00:12:59.719 "nvme_io": false, 00:12:59.719 "nvme_io_md": false, 00:12:59.719 "write_zeroes": true, 00:12:59.719 "zcopy": false, 00:12:59.719 "get_zone_info": false, 00:12:59.719 "zone_management": false, 00:12:59.719 "zone_append": false, 00:12:59.719 "compare": false, 00:12:59.719 "compare_and_write": false, 00:12:59.719 "abort": false, 00:12:59.719 "seek_hole": false, 00:12:59.719 "seek_data": false, 00:12:59.719 "copy": false, 00:12:59.719 "nvme_iov_md": false 00:12:59.719 }, 00:12:59.719 "memory_domains": [ 00:12:59.719 { 00:12:59.719 "dma_device_id": "system", 00:12:59.719 "dma_device_type": 1 00:12:59.719 }, 00:12:59.719 { 00:12:59.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.719 "dma_device_type": 2 00:12:59.719 }, 00:12:59.719 { 00:12:59.719 "dma_device_id": "system", 00:12:59.719 
"dma_device_type": 1 00:12:59.719 }, 00:12:59.719 { 00:12:59.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.719 "dma_device_type": 2 00:12:59.719 } 00:12:59.719 ], 00:12:59.719 "driver_specific": { 00:12:59.719 "raid": { 00:12:59.719 "uuid": "74ee8a21-66b8-4b34-99f0-4c047cabee25", 00:12:59.719 "strip_size_kb": 64, 00:12:59.719 "state": "online", 00:12:59.719 "raid_level": "concat", 00:12:59.719 "superblock": true, 00:12:59.719 "num_base_bdevs": 2, 00:12:59.719 "num_base_bdevs_discovered": 2, 00:12:59.719 "num_base_bdevs_operational": 2, 00:12:59.719 "base_bdevs_list": [ 00:12:59.719 { 00:12:59.719 "name": "pt1", 00:12:59.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.719 "is_configured": true, 00:12:59.719 "data_offset": 2048, 00:12:59.719 "data_size": 63488 00:12:59.719 }, 00:12:59.719 { 00:12:59.719 "name": "pt2", 00:12:59.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.719 "is_configured": true, 00:12:59.719 "data_offset": 2048, 00:12:59.719 "data_size": 63488 00:12:59.719 } 00:12:59.720 ] 00:12:59.720 } 00:12:59.720 } 00:12:59.720 }' 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:59.720 pt2' 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.720 09:05:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.720 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.979 [2024-11-06 09:05:36.321143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=74ee8a21-66b8-4b34-99f0-4c047cabee25 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 74ee8a21-66b8-4b34-99f0-4c047cabee25 ']' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.979 [2024-11-06 09:05:36.372704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.979 [2024-11-06 09:05:36.372881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.979 [2024-11-06 09:05:36.373087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.979 [2024-11-06 09:05:36.373289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.979 [2024-11-06 09:05:36.373444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.979 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.248 [2024-11-06 09:05:36.516818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:00.248 [2024-11-06 09:05:36.519447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:00.248 [2024-11-06 09:05:36.519531] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:00.248 [2024-11-06 09:05:36.519623] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:00.248 [2024-11-06 09:05:36.519649] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.248 [2024-11-06 09:05:36.519664] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:00.248 request: 00:13:00.248 { 00:13:00.248 "name": "raid_bdev1", 00:13:00.248 "raid_level": "concat", 00:13:00.248 "base_bdevs": [ 00:13:00.248 "malloc1", 00:13:00.248 "malloc2" 00:13:00.248 ], 00:13:00.248 "strip_size_kb": 64, 00:13:00.248 "superblock": false, 00:13:00.248 "method": "bdev_raid_create", 00:13:00.248 "req_id": 1 00:13:00.248 } 00:13:00.248 Got JSON-RPC error response 00:13:00.248 response: 00:13:00.248 { 00:13:00.248 "code": -17, 00:13:00.248 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:00.248 } 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.248 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.249 [2024-11-06 09:05:36.580765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.249 [2024-11-06 09:05:36.580993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.249 [2024-11-06 09:05:36.581066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:00.249 [2024-11-06 09:05:36.581274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.249 [2024-11-06 09:05:36.584203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.249 [2024-11-06 09:05:36.584412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:00.249 [2024-11-06 09:05:36.584611] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:00.249 [2024-11-06 09:05:36.584794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:00.249 pt1 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.249 "name": "raid_bdev1", 00:13:00.249 "uuid": "74ee8a21-66b8-4b34-99f0-4c047cabee25", 00:13:00.249 "strip_size_kb": 64, 00:13:00.249 "state": "configuring", 00:13:00.249 "raid_level": "concat", 00:13:00.249 "superblock": true, 00:13:00.249 "num_base_bdevs": 2, 00:13:00.249 "num_base_bdevs_discovered": 1, 00:13:00.249 "num_base_bdevs_operational": 2, 00:13:00.249 "base_bdevs_list": [ 00:13:00.249 { 00:13:00.249 "name": "pt1", 00:13:00.249 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.249 "is_configured": true, 00:13:00.249 "data_offset": 2048, 00:13:00.249 "data_size": 63488 00:13:00.249 }, 00:13:00.249 { 00:13:00.249 "name": null, 00:13:00.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.249 "is_configured": false, 00:13:00.249 "data_offset": 2048, 00:13:00.249 "data_size": 63488 00:13:00.249 } 00:13:00.249 ] 00:13:00.249 }' 00:13:00.249 09:05:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.249 09:05:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.817 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:00.817 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:00.817 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:00.817 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.817 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.817 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.817 [2024-11-06 09:05:37.125382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.817 [2024-11-06 09:05:37.125482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.817 [2024-11-06 09:05:37.125527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:00.817 [2024-11-06 09:05:37.125544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.817 [2024-11-06 09:05:37.126171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.817 [2024-11-06 09:05:37.126253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.817 [2024-11-06 09:05:37.126358] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:00.817 [2024-11-06 09:05:37.126397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.817 [2024-11-06 09:05:37.126547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:00.817 [2024-11-06 09:05:37.126567] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:00.817 [2024-11-06 09:05:37.126890] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:00.817 [2024-11-06 09:05:37.127085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:00.817 [2024-11-06 09:05:37.127101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:00.817 [2024-11-06 09:05:37.127291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.817 pt2 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.818 "name": "raid_bdev1", 00:13:00.818 "uuid": "74ee8a21-66b8-4b34-99f0-4c047cabee25", 00:13:00.818 "strip_size_kb": 64, 00:13:00.818 "state": "online", 00:13:00.818 "raid_level": "concat", 00:13:00.818 "superblock": true, 00:13:00.818 "num_base_bdevs": 2, 00:13:00.818 "num_base_bdevs_discovered": 2, 00:13:00.818 "num_base_bdevs_operational": 2, 00:13:00.818 "base_bdevs_list": [ 00:13:00.818 { 00:13:00.818 "name": "pt1", 00:13:00.818 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.818 "is_configured": true, 00:13:00.818 "data_offset": 2048, 00:13:00.818 "data_size": 63488 00:13:00.818 }, 00:13:00.818 { 00:13:00.818 "name": "pt2", 00:13:00.818 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.818 "is_configured": true, 00:13:00.818 "data_offset": 2048, 00:13:00.818 "data_size": 63488 00:13:00.818 } 00:13:00.818 ] 00:13:00.818 }' 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.818 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:01.386 
09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.386 [2024-11-06 09:05:37.665910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.386 "name": "raid_bdev1", 00:13:01.386 "aliases": [ 00:13:01.386 "74ee8a21-66b8-4b34-99f0-4c047cabee25" 00:13:01.386 ], 00:13:01.386 "product_name": "Raid Volume", 00:13:01.386 "block_size": 512, 00:13:01.386 "num_blocks": 126976, 00:13:01.386 "uuid": "74ee8a21-66b8-4b34-99f0-4c047cabee25", 00:13:01.386 "assigned_rate_limits": { 00:13:01.386 "rw_ios_per_sec": 0, 00:13:01.386 "rw_mbytes_per_sec": 0, 00:13:01.386 "r_mbytes_per_sec": 0, 00:13:01.386 "w_mbytes_per_sec": 0 00:13:01.386 }, 00:13:01.386 "claimed": false, 00:13:01.386 "zoned": false, 00:13:01.386 "supported_io_types": { 00:13:01.386 "read": true, 00:13:01.386 "write": true, 00:13:01.386 "unmap": true, 00:13:01.386 "flush": true, 00:13:01.386 "reset": true, 00:13:01.386 "nvme_admin": false, 00:13:01.386 "nvme_io": false, 00:13:01.386 "nvme_io_md": false, 00:13:01.386 
"write_zeroes": true, 00:13:01.386 "zcopy": false, 00:13:01.386 "get_zone_info": false, 00:13:01.386 "zone_management": false, 00:13:01.386 "zone_append": false, 00:13:01.386 "compare": false, 00:13:01.386 "compare_and_write": false, 00:13:01.386 "abort": false, 00:13:01.386 "seek_hole": false, 00:13:01.386 "seek_data": false, 00:13:01.386 "copy": false, 00:13:01.386 "nvme_iov_md": false 00:13:01.386 }, 00:13:01.386 "memory_domains": [ 00:13:01.386 { 00:13:01.386 "dma_device_id": "system", 00:13:01.386 "dma_device_type": 1 00:13:01.386 }, 00:13:01.386 { 00:13:01.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.386 "dma_device_type": 2 00:13:01.386 }, 00:13:01.386 { 00:13:01.386 "dma_device_id": "system", 00:13:01.386 "dma_device_type": 1 00:13:01.386 }, 00:13:01.386 { 00:13:01.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.386 "dma_device_type": 2 00:13:01.386 } 00:13:01.386 ], 00:13:01.386 "driver_specific": { 00:13:01.386 "raid": { 00:13:01.386 "uuid": "74ee8a21-66b8-4b34-99f0-4c047cabee25", 00:13:01.386 "strip_size_kb": 64, 00:13:01.386 "state": "online", 00:13:01.386 "raid_level": "concat", 00:13:01.386 "superblock": true, 00:13:01.386 "num_base_bdevs": 2, 00:13:01.386 "num_base_bdevs_discovered": 2, 00:13:01.386 "num_base_bdevs_operational": 2, 00:13:01.386 "base_bdevs_list": [ 00:13:01.386 { 00:13:01.386 "name": "pt1", 00:13:01.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.386 "is_configured": true, 00:13:01.386 "data_offset": 2048, 00:13:01.386 "data_size": 63488 00:13:01.386 }, 00:13:01.386 { 00:13:01.386 "name": "pt2", 00:13:01.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.386 "is_configured": true, 00:13:01.386 "data_offset": 2048, 00:13:01.386 "data_size": 63488 00:13:01.386 } 00:13:01.386 ] 00:13:01.386 } 00:13:01.386 } 00:13:01.386 }' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:01.386 pt2' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.386 09:05:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.386 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.645 [2024-11-06 09:05:37.925935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 74ee8a21-66b8-4b34-99f0-4c047cabee25 '!=' 74ee8a21-66b8-4b34-99f0-4c047cabee25 ']' 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62090 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 62090 ']' 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 62090 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:01.645 09:05:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62090 00:13:01.645 killing process with pid 62090 
00:13:01.645 09:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:01.645 09:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:01.645 09:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62090' 00:13:01.645 09:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 62090 00:13:01.645 [2024-11-06 09:05:38.002491] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.645 09:05:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 62090 00:13:01.645 [2024-11-06 09:05:38.002603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.645 [2024-11-06 09:05:38.002682] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.645 [2024-11-06 09:05:38.002734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:01.645 [2024-11-06 09:05:38.170818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.021 09:05:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:03.021 00:13:03.021 real 0m4.810s 00:13:03.021 user 0m7.147s 00:13:03.021 sys 0m0.688s 00:13:03.021 ************************************ 00:13:03.021 END TEST raid_superblock_test 00:13:03.021 ************************************ 00:13:03.021 09:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:03.021 09:05:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.021 09:05:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:13:03.021 09:05:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:03.021 09:05:39 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:13:03.021 09:05:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.021 ************************************ 00:13:03.021 START TEST raid_read_error_test 00:13:03.021 ************************************ 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:03.021 09:05:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pctJ48aMGl 00:13:03.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62307 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62307 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 62307 ']' 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.021 09:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.022 09:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:03.022 09:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:03.022 09:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:03.022 09:05:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.022 [2024-11-06 09:05:39.301456] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:13:03.022 [2024-11-06 09:05:39.301611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62307 ] 00:13:03.022 [2024-11-06 09:05:39.471760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.279 [2024-11-06 09:05:39.599231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.279 [2024-11-06 09:05:39.784328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.279 [2024-11-06 09:05:39.784402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.845 BaseBdev1_malloc 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.845 true 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.845 [2024-11-06 09:05:40.371728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:03.845 [2024-11-06 09:05:40.371808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.845 [2024-11-06 09:05:40.371870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:03.845 [2024-11-06 09:05:40.371891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.845 [2024-11-06 09:05:40.374875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.845 [2024-11-06 09:05:40.375081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.845 BaseBdev1 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.845 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:04.103 BaseBdev2_malloc 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.103 true 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.103 [2024-11-06 09:05:40.430863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:04.103 [2024-11-06 09:05:40.431096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.103 [2024-11-06 09:05:40.431135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:04.103 [2024-11-06 09:05:40.431154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.103 [2024-11-06 09:05:40.434200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.103 [2024-11-06 09:05:40.434420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.103 BaseBdev2 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.103 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:04.103 
09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.104 [2024-11-06 09:05:40.443149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.104 [2024-11-06 09:05:40.445849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.104 [2024-11-06 09:05:40.446250] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:04.104 [2024-11-06 09:05:40.446436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:04.104 [2024-11-06 09:05:40.446782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:04.104 [2024-11-06 09:05:40.447176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:04.104 [2024-11-06 09:05:40.447349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:04.104 [2024-11-06 09:05:40.447791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.104 "name": "raid_bdev1", 00:13:04.104 "uuid": "d9d623d2-924b-4b3d-ae3f-91df664261c9", 00:13:04.104 "strip_size_kb": 64, 00:13:04.104 "state": "online", 00:13:04.104 "raid_level": "concat", 00:13:04.104 "superblock": true, 00:13:04.104 "num_base_bdevs": 2, 00:13:04.104 "num_base_bdevs_discovered": 2, 00:13:04.104 "num_base_bdevs_operational": 2, 00:13:04.104 "base_bdevs_list": [ 00:13:04.104 { 00:13:04.104 "name": "BaseBdev1", 00:13:04.104 "uuid": "33b69cef-6097-5bb2-8898-01198d07d18e", 00:13:04.104 "is_configured": true, 00:13:04.104 "data_offset": 2048, 00:13:04.104 "data_size": 63488 00:13:04.104 }, 00:13:04.104 { 00:13:04.104 "name": "BaseBdev2", 00:13:04.104 "uuid": "457fd381-e10a-575c-8575-f638a9d980dc", 00:13:04.104 "is_configured": true, 00:13:04.104 "data_offset": 2048, 00:13:04.104 "data_size": 63488 00:13:04.104 } 00:13:04.104 ] 00:13:04.104 }' 00:13:04.104 09:05:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.104 09:05:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.671 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:04.671 09:05:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:04.671 [2024-11-06 09:05:41.081456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.606 09:05:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.606 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.606 09:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.606 "name": "raid_bdev1", 00:13:05.606 "uuid": "d9d623d2-924b-4b3d-ae3f-91df664261c9", 00:13:05.606 "strip_size_kb": 64, 00:13:05.606 "state": "online", 00:13:05.606 "raid_level": "concat", 00:13:05.606 "superblock": true, 00:13:05.606 "num_base_bdevs": 2, 00:13:05.606 "num_base_bdevs_discovered": 2, 00:13:05.606 "num_base_bdevs_operational": 2, 00:13:05.606 "base_bdevs_list": [ 00:13:05.606 { 00:13:05.606 "name": "BaseBdev1", 00:13:05.606 "uuid": "33b69cef-6097-5bb2-8898-01198d07d18e", 00:13:05.606 "is_configured": true, 00:13:05.606 "data_offset": 2048, 00:13:05.606 "data_size": 63488 00:13:05.606 }, 00:13:05.606 { 00:13:05.606 "name": "BaseBdev2", 00:13:05.606 "uuid": "457fd381-e10a-575c-8575-f638a9d980dc", 00:13:05.606 "is_configured": true, 00:13:05.606 "data_offset": 2048, 00:13:05.606 "data_size": 63488 00:13:05.606 } 00:13:05.606 ] 00:13:05.606 }' 00:13:05.606 09:05:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.606 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.172 [2024-11-06 09:05:42.508986] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.172 [2024-11-06 09:05:42.509027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.172 [2024-11-06 09:05:42.512318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.172 [2024-11-06 09:05:42.512371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.172 [2024-11-06 09:05:42.512412] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.172 [2024-11-06 09:05:42.512432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:06.172 { 00:13:06.172 "results": [ 00:13:06.172 { 00:13:06.172 "job": "raid_bdev1", 00:13:06.172 "core_mask": "0x1", 00:13:06.172 "workload": "randrw", 00:13:06.172 "percentage": 50, 00:13:06.172 "status": "finished", 00:13:06.172 "queue_depth": 1, 00:13:06.172 "io_size": 131072, 00:13:06.172 "runtime": 1.423954, 00:13:06.172 "iops": 11290.39280763283, 00:13:06.172 "mibps": 1411.2991009541038, 00:13:06.172 "io_failed": 1, 00:13:06.172 "io_timeout": 0, 00:13:06.172 "avg_latency_us": 123.78704723563537, 00:13:06.172 "min_latency_us": 37.93454545454546, 00:13:06.172 "max_latency_us": 1899.0545454545454 00:13:06.172 } 00:13:06.172 ], 00:13:06.172 "core_count": 1 00:13:06.172 } 00:13:06.172 09:05:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62307 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 62307 ']' 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 62307 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62307 00:13:06.172 killing process with pid 62307 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62307' 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 62307 00:13:06.172 [2024-11-06 09:05:42.552789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.172 09:05:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 62307 00:13:06.172 [2024-11-06 09:05:42.671493] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pctJ48aMGl 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:13:07.548 09:05:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:07.548 ************************************ 00:13:07.548 END TEST raid_read_error_test 00:13:07.548 ************************************ 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:13:07.548 00:13:07.548 real 0m4.537s 00:13:07.548 user 0m5.723s 00:13:07.548 sys 0m0.550s 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:07.548 09:05:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.548 09:05:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:13:07.548 09:05:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:07.548 09:05:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:07.548 09:05:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.548 ************************************ 00:13:07.548 START TEST raid_write_error_test 00:13:07.548 ************************************ 00:13:07.548 09:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:13:07.548 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:07.548 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:07.548 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:07.548 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CqBiJG87CU 00:13:07.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62447 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62447 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 62447 ']' 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:07.549 09:05:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.549 [2024-11-06 09:05:43.914260] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:13:07.549 [2024-11-06 09:05:43.914468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62447 ] 00:13:07.807 [2024-11-06 09:05:44.101214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.807 [2024-11-06 09:05:44.238724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.065 [2024-11-06 09:05:44.433069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.065 [2024-11-06 09:05:44.433143] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 BaseBdev1_malloc 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 true 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 [2024-11-06 09:05:44.946200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:08.633 [2024-11-06 09:05:44.946451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.633 [2024-11-06 09:05:44.946491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:08.633 [2024-11-06 09:05:44.946511] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.633 [2024-11-06 09:05:44.949374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.633 [2024-11-06 09:05:44.949439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.633 BaseBdev1 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 BaseBdev2_malloc 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:08.633 09:05:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 true 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 [2024-11-06 09:05:45.013593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:08.633 [2024-11-06 09:05:45.013676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.633 [2024-11-06 09:05:45.013707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:08.633 [2024-11-06 09:05:45.013724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.633 [2024-11-06 09:05:45.016504] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.633 [2024-11-06 09:05:45.016566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.633 BaseBdev2 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 [2024-11-06 09:05:45.021710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:13:08.633 [2024-11-06 09:05:45.024130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.633 [2024-11-06 09:05:45.024466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:08.633 [2024-11-06 09:05:45.024491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:08.633 [2024-11-06 09:05:45.024780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:08.633 [2024-11-06 09:05:45.025042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:08.633 [2024-11-06 09:05:45.025064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:08.633 [2024-11-06 09:05:45.025256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.633 09:05:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.633 "name": "raid_bdev1", 00:13:08.633 "uuid": "8b99d231-5256-4ea6-a05f-dec8af33871c", 00:13:08.633 "strip_size_kb": 64, 00:13:08.633 "state": "online", 00:13:08.633 "raid_level": "concat", 00:13:08.633 "superblock": true, 00:13:08.633 "num_base_bdevs": 2, 00:13:08.633 "num_base_bdevs_discovered": 2, 00:13:08.633 "num_base_bdevs_operational": 2, 00:13:08.633 "base_bdevs_list": [ 00:13:08.633 { 00:13:08.633 "name": "BaseBdev1", 00:13:08.633 "uuid": "0a8717a0-1df8-5d0e-9cc6-60aa51c034d8", 00:13:08.633 "is_configured": true, 00:13:08.633 "data_offset": 2048, 00:13:08.633 "data_size": 63488 00:13:08.633 }, 00:13:08.633 { 00:13:08.633 "name": "BaseBdev2", 00:13:08.633 "uuid": "4113eadd-0bd1-5a62-9f91-858a91d9c60f", 00:13:08.633 "is_configured": true, 00:13:08.633 "data_offset": 2048, 00:13:08.633 "data_size": 63488 00:13:08.633 } 00:13:08.633 ] 00:13:08.633 }' 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.633 09:05:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.199 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:13:09.199 09:05:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.199 [2024-11-06 09:05:45.683329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.135 "name": "raid_bdev1", 00:13:10.135 "uuid": "8b99d231-5256-4ea6-a05f-dec8af33871c", 00:13:10.135 "strip_size_kb": 64, 00:13:10.135 "state": "online", 00:13:10.135 "raid_level": "concat", 00:13:10.135 "superblock": true, 00:13:10.135 "num_base_bdevs": 2, 00:13:10.135 "num_base_bdevs_discovered": 2, 00:13:10.135 "num_base_bdevs_operational": 2, 00:13:10.135 "base_bdevs_list": [ 00:13:10.135 { 00:13:10.135 "name": "BaseBdev1", 00:13:10.135 "uuid": "0a8717a0-1df8-5d0e-9cc6-60aa51c034d8", 00:13:10.135 "is_configured": true, 00:13:10.135 "data_offset": 2048, 00:13:10.135 "data_size": 63488 00:13:10.135 }, 00:13:10.135 { 00:13:10.135 "name": "BaseBdev2", 00:13:10.135 "uuid": "4113eadd-0bd1-5a62-9f91-858a91d9c60f", 00:13:10.135 "is_configured": true, 00:13:10.135 "data_offset": 2048, 00:13:10.135 "data_size": 63488 00:13:10.135 } 00:13:10.135 ] 00:13:10.135 }' 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.135 09:05:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.710 [2024-11-06 09:05:47.098572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.710 [2024-11-06 09:05:47.098615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.710 [2024-11-06 09:05:47.102006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.710 [2024-11-06 09:05:47.102067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.710 [2024-11-06 09:05:47.102111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.710 [2024-11-06 09:05:47.102142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:10.710 { 00:13:10.710 "results": [ 00:13:10.710 { 00:13:10.710 "job": "raid_bdev1", 00:13:10.710 "core_mask": "0x1", 00:13:10.710 "workload": "randrw", 00:13:10.710 "percentage": 50, 00:13:10.710 "status": "finished", 00:13:10.710 "queue_depth": 1, 00:13:10.710 "io_size": 131072, 00:13:10.710 "runtime": 1.412677, 00:13:10.710 "iops": 10982.694557920884, 00:13:10.710 "mibps": 1372.8368197401105, 00:13:10.710 "io_failed": 1, 00:13:10.710 "io_timeout": 0, 00:13:10.710 "avg_latency_us": 126.9686189036537, 00:13:10.710 "min_latency_us": 37.93454545454546, 00:13:10.710 "max_latency_us": 1966.08 00:13:10.710 } 00:13:10.710 ], 00:13:10.710 "core_count": 1 00:13:10.710 } 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62447 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@952 -- # '[' -z 62447 ']' 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 62447 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62447 00:13:10.710 killing process with pid 62447 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62447' 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 62447 00:13:10.710 [2024-11-06 09:05:47.141912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:10.710 09:05:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 62447 00:13:10.968 [2024-11-06 09:05:47.265449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CqBiJG87CU 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:11.903 00:13:11.903 real 0m4.553s 00:13:11.903 user 0m5.717s 00:13:11.903 sys 0m0.581s 00:13:11.903 ************************************ 00:13:11.903 END TEST raid_write_error_test 00:13:11.903 ************************************ 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:11.903 09:05:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.903 09:05:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:11.903 09:05:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:13:11.903 09:05:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:11.903 09:05:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:11.903 09:05:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.903 ************************************ 00:13:11.903 START TEST raid_state_function_test 00:13:11.903 ************************************ 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62591 00:13:11.903 Process raid pid: 62591 00:13:11.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62591' 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62591 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 62591 ']' 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:11.903 09:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.161 [2024-11-06 09:05:48.518114] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:13:12.161 [2024-11-06 09:05:48.518275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.419 [2024-11-06 09:05:48.707586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.419 [2024-11-06 09:05:48.867039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.677 [2024-11-06 09:05:49.091696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.678 [2024-11-06 09:05:49.092008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.244 [2024-11-06 09:05:49.497088] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.244 [2024-11-06 09:05:49.497153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.244 [2024-11-06 09:05:49.497172] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.244 [2024-11-06 09:05:49.497189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.244 09:05:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.244 "name": "Existed_Raid", 00:13:13.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.244 "strip_size_kb": 0, 00:13:13.244 "state": "configuring", 00:13:13.244 
"raid_level": "raid1", 00:13:13.244 "superblock": false, 00:13:13.244 "num_base_bdevs": 2, 00:13:13.244 "num_base_bdevs_discovered": 0, 00:13:13.244 "num_base_bdevs_operational": 2, 00:13:13.244 "base_bdevs_list": [ 00:13:13.244 { 00:13:13.244 "name": "BaseBdev1", 00:13:13.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.244 "is_configured": false, 00:13:13.244 "data_offset": 0, 00:13:13.244 "data_size": 0 00:13:13.244 }, 00:13:13.244 { 00:13:13.244 "name": "BaseBdev2", 00:13:13.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.244 "is_configured": false, 00:13:13.244 "data_offset": 0, 00:13:13.244 "data_size": 0 00:13:13.244 } 00:13:13.244 ] 00:13:13.244 }' 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.244 09:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.503 [2024-11-06 09:05:50.017240] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.503 [2024-11-06 09:05:50.017280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:13.503 [2024-11-06 09:05:50.029215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.503 [2024-11-06 09:05:50.029439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.503 [2024-11-06 09:05:50.029564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.503 [2024-11-06 09:05:50.029627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.503 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.761 [2024-11-06 09:05:50.074554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:13.761 BaseBdev1 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:13.761 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
rpc_cmd bdev_wait_for_examine 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.762 [ 00:13:13.762 { 00:13:13.762 "name": "BaseBdev1", 00:13:13.762 "aliases": [ 00:13:13.762 "8c5f5f56-8a7d-439f-9f70-2bd06d1297f1" 00:13:13.762 ], 00:13:13.762 "product_name": "Malloc disk", 00:13:13.762 "block_size": 512, 00:13:13.762 "num_blocks": 65536, 00:13:13.762 "uuid": "8c5f5f56-8a7d-439f-9f70-2bd06d1297f1", 00:13:13.762 "assigned_rate_limits": { 00:13:13.762 "rw_ios_per_sec": 0, 00:13:13.762 "rw_mbytes_per_sec": 0, 00:13:13.762 "r_mbytes_per_sec": 0, 00:13:13.762 "w_mbytes_per_sec": 0 00:13:13.762 }, 00:13:13.762 "claimed": true, 00:13:13.762 "claim_type": "exclusive_write", 00:13:13.762 "zoned": false, 00:13:13.762 "supported_io_types": { 00:13:13.762 "read": true, 00:13:13.762 "write": true, 00:13:13.762 "unmap": true, 00:13:13.762 "flush": true, 00:13:13.762 "reset": true, 00:13:13.762 "nvme_admin": false, 00:13:13.762 "nvme_io": false, 00:13:13.762 "nvme_io_md": false, 00:13:13.762 "write_zeroes": true, 00:13:13.762 "zcopy": true, 00:13:13.762 "get_zone_info": false, 00:13:13.762 "zone_management": false, 00:13:13.762 "zone_append": false, 00:13:13.762 "compare": false, 00:13:13.762 "compare_and_write": false, 00:13:13.762 "abort": true, 00:13:13.762 "seek_hole": false, 00:13:13.762 "seek_data": false, 00:13:13.762 "copy": true, 00:13:13.762 "nvme_iov_md": 
false 00:13:13.762 }, 00:13:13.762 "memory_domains": [ 00:13:13.762 { 00:13:13.762 "dma_device_id": "system", 00:13:13.762 "dma_device_type": 1 00:13:13.762 }, 00:13:13.762 { 00:13:13.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.762 "dma_device_type": 2 00:13:13.762 } 00:13:13.762 ], 00:13:13.762 "driver_specific": {} 00:13:13.762 } 00:13:13.762 ] 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.762 
09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.762 "name": "Existed_Raid", 00:13:13.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.762 "strip_size_kb": 0, 00:13:13.762 "state": "configuring", 00:13:13.762 "raid_level": "raid1", 00:13:13.762 "superblock": false, 00:13:13.762 "num_base_bdevs": 2, 00:13:13.762 "num_base_bdevs_discovered": 1, 00:13:13.762 "num_base_bdevs_operational": 2, 00:13:13.762 "base_bdevs_list": [ 00:13:13.762 { 00:13:13.762 "name": "BaseBdev1", 00:13:13.762 "uuid": "8c5f5f56-8a7d-439f-9f70-2bd06d1297f1", 00:13:13.762 "is_configured": true, 00:13:13.762 "data_offset": 0, 00:13:13.762 "data_size": 65536 00:13:13.762 }, 00:13:13.762 { 00:13:13.762 "name": "BaseBdev2", 00:13:13.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.762 "is_configured": false, 00:13:13.762 "data_offset": 0, 00:13:13.762 "data_size": 0 00:13:13.762 } 00:13:13.762 ] 00:13:13.762 }' 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.762 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 [2024-11-06 09:05:50.630778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.328 [2024-11-06 09:05:50.630998] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 [2024-11-06 09:05:50.638859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.328 [2024-11-06 09:05:50.641281] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.328 [2024-11-06 09:05:50.641329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.328 "name": "Existed_Raid", 00:13:14.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.328 "strip_size_kb": 0, 00:13:14.328 "state": "configuring", 00:13:14.328 "raid_level": "raid1", 00:13:14.328 "superblock": false, 00:13:14.328 "num_base_bdevs": 2, 00:13:14.328 "num_base_bdevs_discovered": 1, 00:13:14.328 "num_base_bdevs_operational": 2, 00:13:14.328 "base_bdevs_list": [ 00:13:14.328 { 00:13:14.328 "name": "BaseBdev1", 00:13:14.328 "uuid": "8c5f5f56-8a7d-439f-9f70-2bd06d1297f1", 00:13:14.328 "is_configured": true, 00:13:14.328 "data_offset": 0, 00:13:14.328 "data_size": 65536 00:13:14.328 }, 00:13:14.328 { 00:13:14.328 "name": "BaseBdev2", 00:13:14.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.328 "is_configured": false, 00:13:14.328 "data_offset": 0, 00:13:14.328 "data_size": 0 00:13:14.328 } 00:13:14.328 ] 
00:13:14.328 }' 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.328 09:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.895 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.896 [2024-11-06 09:05:51.201899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.896 [2024-11-06 09:05:51.201964] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:14.896 [2024-11-06 09:05:51.201978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:14.896 [2024-11-06 09:05:51.202367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:14.896 [2024-11-06 09:05:51.202566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:14.896 [2024-11-06 09:05:51.202589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:14.896 [2024-11-06 09:05:51.202958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.896 BaseBdev2 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@903 -- # local i 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.896 [ 00:13:14.896 { 00:13:14.896 "name": "BaseBdev2", 00:13:14.896 "aliases": [ 00:13:14.896 "a6614d3a-79f5-4781-a68a-cba80737ae22" 00:13:14.896 ], 00:13:14.896 "product_name": "Malloc disk", 00:13:14.896 "block_size": 512, 00:13:14.896 "num_blocks": 65536, 00:13:14.896 "uuid": "a6614d3a-79f5-4781-a68a-cba80737ae22", 00:13:14.896 "assigned_rate_limits": { 00:13:14.896 "rw_ios_per_sec": 0, 00:13:14.896 "rw_mbytes_per_sec": 0, 00:13:14.896 "r_mbytes_per_sec": 0, 00:13:14.896 "w_mbytes_per_sec": 0 00:13:14.896 }, 00:13:14.896 "claimed": true, 00:13:14.896 "claim_type": "exclusive_write", 00:13:14.896 "zoned": false, 00:13:14.896 "supported_io_types": { 00:13:14.896 "read": true, 00:13:14.896 "write": true, 00:13:14.896 "unmap": true, 00:13:14.896 "flush": true, 00:13:14.896 "reset": true, 00:13:14.896 "nvme_admin": false, 00:13:14.896 "nvme_io": false, 00:13:14.896 "nvme_io_md": false, 00:13:14.896 "write_zeroes": 
true, 00:13:14.896 "zcopy": true, 00:13:14.896 "get_zone_info": false, 00:13:14.896 "zone_management": false, 00:13:14.896 "zone_append": false, 00:13:14.896 "compare": false, 00:13:14.896 "compare_and_write": false, 00:13:14.896 "abort": true, 00:13:14.896 "seek_hole": false, 00:13:14.896 "seek_data": false, 00:13:14.896 "copy": true, 00:13:14.896 "nvme_iov_md": false 00:13:14.896 }, 00:13:14.896 "memory_domains": [ 00:13:14.896 { 00:13:14.896 "dma_device_id": "system", 00:13:14.896 "dma_device_type": 1 00:13:14.896 }, 00:13:14.896 { 00:13:14.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.896 "dma_device_type": 2 00:13:14.896 } 00:13:14.896 ], 00:13:14.896 "driver_specific": {} 00:13:14.896 } 00:13:14.896 ] 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.896 09:05:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.896 "name": "Existed_Raid", 00:13:14.896 "uuid": "7c41b712-6f9b-43d5-9f59-9706c70b08bc", 00:13:14.896 "strip_size_kb": 0, 00:13:14.896 "state": "online", 00:13:14.896 "raid_level": "raid1", 00:13:14.896 "superblock": false, 00:13:14.896 "num_base_bdevs": 2, 00:13:14.896 "num_base_bdevs_discovered": 2, 00:13:14.896 "num_base_bdevs_operational": 2, 00:13:14.896 "base_bdevs_list": [ 00:13:14.896 { 00:13:14.896 "name": "BaseBdev1", 00:13:14.896 "uuid": "8c5f5f56-8a7d-439f-9f70-2bd06d1297f1", 00:13:14.896 "is_configured": true, 00:13:14.896 "data_offset": 0, 00:13:14.896 "data_size": 65536 00:13:14.896 }, 00:13:14.896 { 00:13:14.896 "name": "BaseBdev2", 00:13:14.896 "uuid": "a6614d3a-79f5-4781-a68a-cba80737ae22", 00:13:14.896 "is_configured": true, 00:13:14.896 "data_offset": 0, 00:13:14.896 "data_size": 65536 00:13:14.896 } 00:13:14.896 ] 00:13:14.896 }' 00:13:14.896 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.896 09:05:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:15.463 [2024-11-06 09:05:51.762466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.463 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:15.463 "name": "Existed_Raid", 00:13:15.463 "aliases": [ 00:13:15.463 "7c41b712-6f9b-43d5-9f59-9706c70b08bc" 00:13:15.463 ], 00:13:15.463 "product_name": "Raid Volume", 00:13:15.463 "block_size": 512, 00:13:15.463 "num_blocks": 65536, 00:13:15.463 "uuid": "7c41b712-6f9b-43d5-9f59-9706c70b08bc", 00:13:15.463 "assigned_rate_limits": { 00:13:15.464 "rw_ios_per_sec": 0, 00:13:15.464 "rw_mbytes_per_sec": 0, 00:13:15.464 "r_mbytes_per_sec": 0, 00:13:15.464 
"w_mbytes_per_sec": 0 00:13:15.464 }, 00:13:15.464 "claimed": false, 00:13:15.464 "zoned": false, 00:13:15.464 "supported_io_types": { 00:13:15.464 "read": true, 00:13:15.464 "write": true, 00:13:15.464 "unmap": false, 00:13:15.464 "flush": false, 00:13:15.464 "reset": true, 00:13:15.464 "nvme_admin": false, 00:13:15.464 "nvme_io": false, 00:13:15.464 "nvme_io_md": false, 00:13:15.464 "write_zeroes": true, 00:13:15.464 "zcopy": false, 00:13:15.464 "get_zone_info": false, 00:13:15.464 "zone_management": false, 00:13:15.464 "zone_append": false, 00:13:15.464 "compare": false, 00:13:15.464 "compare_and_write": false, 00:13:15.464 "abort": false, 00:13:15.464 "seek_hole": false, 00:13:15.464 "seek_data": false, 00:13:15.464 "copy": false, 00:13:15.464 "nvme_iov_md": false 00:13:15.464 }, 00:13:15.464 "memory_domains": [ 00:13:15.464 { 00:13:15.464 "dma_device_id": "system", 00:13:15.464 "dma_device_type": 1 00:13:15.464 }, 00:13:15.464 { 00:13:15.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.464 "dma_device_type": 2 00:13:15.464 }, 00:13:15.464 { 00:13:15.464 "dma_device_id": "system", 00:13:15.464 "dma_device_type": 1 00:13:15.464 }, 00:13:15.464 { 00:13:15.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.464 "dma_device_type": 2 00:13:15.464 } 00:13:15.464 ], 00:13:15.464 "driver_specific": { 00:13:15.464 "raid": { 00:13:15.464 "uuid": "7c41b712-6f9b-43d5-9f59-9706c70b08bc", 00:13:15.464 "strip_size_kb": 0, 00:13:15.464 "state": "online", 00:13:15.464 "raid_level": "raid1", 00:13:15.464 "superblock": false, 00:13:15.464 "num_base_bdevs": 2, 00:13:15.464 "num_base_bdevs_discovered": 2, 00:13:15.464 "num_base_bdevs_operational": 2, 00:13:15.464 "base_bdevs_list": [ 00:13:15.464 { 00:13:15.464 "name": "BaseBdev1", 00:13:15.464 "uuid": "8c5f5f56-8a7d-439f-9f70-2bd06d1297f1", 00:13:15.464 "is_configured": true, 00:13:15.464 "data_offset": 0, 00:13:15.464 "data_size": 65536 00:13:15.464 }, 00:13:15.464 { 00:13:15.464 "name": "BaseBdev2", 00:13:15.464 "uuid": 
"a6614d3a-79f5-4781-a68a-cba80737ae22", 00:13:15.464 "is_configured": true, 00:13:15.464 "data_offset": 0, 00:13:15.464 "data_size": 65536 00:13:15.464 } 00:13:15.464 ] 00:13:15.464 } 00:13:15.464 } 00:13:15.464 }' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:15.464 BaseBdev2' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:15.464 09:05:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.464 09:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.464 [2024-11-06 09:05:51.990238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.723 "name": "Existed_Raid", 00:13:15.723 "uuid": "7c41b712-6f9b-43d5-9f59-9706c70b08bc", 00:13:15.723 "strip_size_kb": 0, 00:13:15.723 "state": "online", 00:13:15.723 "raid_level": "raid1", 00:13:15.723 "superblock": false, 00:13:15.723 "num_base_bdevs": 2, 00:13:15.723 "num_base_bdevs_discovered": 1, 00:13:15.723 "num_base_bdevs_operational": 1, 00:13:15.723 "base_bdevs_list": [ 00:13:15.723 { 
00:13:15.723 "name": null, 00:13:15.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.723 "is_configured": false, 00:13:15.723 "data_offset": 0, 00:13:15.723 "data_size": 65536 00:13:15.723 }, 00:13:15.723 { 00:13:15.723 "name": "BaseBdev2", 00:13:15.723 "uuid": "a6614d3a-79f5-4781-a68a-cba80737ae22", 00:13:15.723 "is_configured": true, 00:13:15.723 "data_offset": 0, 00:13:15.723 "data_size": 65536 00:13:15.723 } 00:13:15.723 ] 00:13:15.723 }' 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.723 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:16.301 [2024-11-06 09:05:52.676345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:16.301 [2024-11-06 09:05:52.676475] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.301 [2024-11-06 09:05:52.760276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.301 [2024-11-06 09:05:52.760348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.301 [2024-11-06 09:05:52.760384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.301 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62591 00:13:16.584 09:05:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 62591 ']' 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 62591 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62591 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62591' 00:13:16.584 killing process with pid 62591 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 62591 00:13:16.584 [2024-11-06 09:05:52.869472] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.584 09:05:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 62591 00:13:16.584 [2024-11-06 09:05:52.884637] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:17.518 00:13:17.518 real 0m5.488s 00:13:17.518 user 0m8.280s 00:13:17.518 sys 0m0.821s 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:17.518 ************************************ 00:13:17.518 END TEST raid_state_function_test 00:13:17.518 ************************************ 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.518 09:05:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:13:17.518 09:05:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:17.518 09:05:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:17.518 09:05:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:17.518 ************************************ 00:13:17.518 START TEST raid_state_function_test_sb 00:13:17.518 ************************************ 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:17.518 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62849 00:13:17.519 Process raid pid: 62849 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62849' 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62849 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 62849 ']' 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:17.519 09:05:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.519 [2024-11-06 09:05:54.045191] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:13:17.519 [2024-11-06 09:05:54.045384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.777 [2024-11-06 09:05:54.236129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.036 [2024-11-06 09:05:54.395439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.294 [2024-11-06 09:05:54.602320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.294 [2024-11-06 09:05:54.602377] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:18.860 [2024-11-06 09:05:55.092480] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.860 [2024-11-06 09:05:55.092541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.860 [2024-11-06 09:05:55.092559] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.860 [2024-11-06 09:05:55.092576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.860 09:05:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.860 "name": "Existed_Raid", 00:13:18.860 "uuid": "692dcf8d-6b04-4fd2-a44a-e0f2d1416d42", 00:13:18.860 "strip_size_kb": 0, 00:13:18.860 "state": "configuring", 00:13:18.860 "raid_level": "raid1", 00:13:18.860 "superblock": true, 00:13:18.860 "num_base_bdevs": 2, 00:13:18.860 "num_base_bdevs_discovered": 0, 00:13:18.860 "num_base_bdevs_operational": 2, 00:13:18.860 "base_bdevs_list": [ 00:13:18.860 { 00:13:18.860 "name": "BaseBdev1", 00:13:18.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.860 "is_configured": false, 00:13:18.860 "data_offset": 0, 00:13:18.860 "data_size": 0 00:13:18.860 }, 00:13:18.860 { 00:13:18.860 "name": "BaseBdev2", 00:13:18.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.860 "is_configured": false, 00:13:18.860 "data_offset": 0, 00:13:18.860 "data_size": 0 00:13:18.860 } 00:13:18.860 ] 00:13:18.860 }' 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.860 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 [2024-11-06 
09:05:55.596567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.118 [2024-11-06 09:05:55.596612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 [2024-11-06 09:05:55.604547] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:19.118 [2024-11-06 09:05:55.604609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:19.118 [2024-11-06 09:05:55.604641] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.118 [2024-11-06 09:05:55.604660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.118 [2024-11-06 09:05:55.649226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.118 BaseBdev1 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:19.118 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.376 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.376 [ 00:13:19.376 { 00:13:19.376 "name": "BaseBdev1", 00:13:19.376 "aliases": [ 00:13:19.376 "1e17c431-cb27-4f41-a41d-87cb4bb60507" 00:13:19.376 ], 00:13:19.376 "product_name": "Malloc disk", 00:13:19.376 "block_size": 512, 00:13:19.376 "num_blocks": 65536, 00:13:19.376 "uuid": "1e17c431-cb27-4f41-a41d-87cb4bb60507", 00:13:19.376 "assigned_rate_limits": { 00:13:19.376 "rw_ios_per_sec": 0, 00:13:19.376 "rw_mbytes_per_sec": 0, 00:13:19.376 "r_mbytes_per_sec": 0, 00:13:19.376 
"w_mbytes_per_sec": 0 00:13:19.376 }, 00:13:19.376 "claimed": true, 00:13:19.376 "claim_type": "exclusive_write", 00:13:19.376 "zoned": false, 00:13:19.376 "supported_io_types": { 00:13:19.376 "read": true, 00:13:19.376 "write": true, 00:13:19.376 "unmap": true, 00:13:19.376 "flush": true, 00:13:19.376 "reset": true, 00:13:19.376 "nvme_admin": false, 00:13:19.376 "nvme_io": false, 00:13:19.376 "nvme_io_md": false, 00:13:19.376 "write_zeroes": true, 00:13:19.376 "zcopy": true, 00:13:19.377 "get_zone_info": false, 00:13:19.377 "zone_management": false, 00:13:19.377 "zone_append": false, 00:13:19.377 "compare": false, 00:13:19.377 "compare_and_write": false, 00:13:19.377 "abort": true, 00:13:19.377 "seek_hole": false, 00:13:19.377 "seek_data": false, 00:13:19.377 "copy": true, 00:13:19.377 "nvme_iov_md": false 00:13:19.377 }, 00:13:19.377 "memory_domains": [ 00:13:19.377 { 00:13:19.377 "dma_device_id": "system", 00:13:19.377 "dma_device_type": 1 00:13:19.377 }, 00:13:19.377 { 00:13:19.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.377 "dma_device_type": 2 00:13:19.377 } 00:13:19.377 ], 00:13:19.377 "driver_specific": {} 00:13:19.377 } 00:13:19.377 ] 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.377 "name": "Existed_Raid", 00:13:19.377 "uuid": "34bf2a1b-0be0-4f37-a3b7-9e842d9b6bbd", 00:13:19.377 "strip_size_kb": 0, 00:13:19.377 "state": "configuring", 00:13:19.377 "raid_level": "raid1", 00:13:19.377 "superblock": true, 00:13:19.377 "num_base_bdevs": 2, 00:13:19.377 "num_base_bdevs_discovered": 1, 00:13:19.377 "num_base_bdevs_operational": 2, 00:13:19.377 "base_bdevs_list": [ 00:13:19.377 { 00:13:19.377 "name": "BaseBdev1", 00:13:19.377 "uuid": "1e17c431-cb27-4f41-a41d-87cb4bb60507", 00:13:19.377 "is_configured": true, 00:13:19.377 "data_offset": 2048, 00:13:19.377 "data_size": 63488 00:13:19.377 }, 00:13:19.377 { 00:13:19.377 "name": "BaseBdev2", 00:13:19.377 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:19.377 "is_configured": false, 00:13:19.377 "data_offset": 0, 00:13:19.377 "data_size": 0 00:13:19.377 } 00:13:19.377 ] 00:13:19.377 }' 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.377 09:05:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.943 [2024-11-06 09:05:56.205449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.943 [2024-11-06 09:05:56.205527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.943 [2024-11-06 09:05:56.213473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.943 [2024-11-06 09:05:56.215973] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:19.943 [2024-11-06 09:05:56.216024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.943 "name": "Existed_Raid", 00:13:19.943 "uuid": "3a1483c1-01b8-43c2-9e0a-2afcae8a3a70", 00:13:19.943 "strip_size_kb": 0, 00:13:19.943 "state": "configuring", 00:13:19.943 "raid_level": "raid1", 00:13:19.943 "superblock": true, 00:13:19.943 "num_base_bdevs": 2, 00:13:19.943 "num_base_bdevs_discovered": 1, 00:13:19.943 "num_base_bdevs_operational": 2, 00:13:19.943 "base_bdevs_list": [ 00:13:19.943 { 00:13:19.943 "name": "BaseBdev1", 00:13:19.943 "uuid": "1e17c431-cb27-4f41-a41d-87cb4bb60507", 00:13:19.943 "is_configured": true, 00:13:19.943 "data_offset": 2048, 00:13:19.943 "data_size": 63488 00:13:19.943 }, 00:13:19.943 { 00:13:19.943 "name": "BaseBdev2", 00:13:19.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.943 "is_configured": false, 00:13:19.943 "data_offset": 0, 00:13:19.943 "data_size": 0 00:13:19.943 } 00:13:19.943 ] 00:13:19.943 }' 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.943 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.510 [2024-11-06 09:05:56.792324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.510 [2024-11-06 09:05:56.792670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:20.510 [2024-11-06 09:05:56.792689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.510 [2024-11-06 09:05:56.793051] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:20.510 BaseBdev2 00:13:20.510 [2024-11-06 09:05:56.793248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:20.510 [2024-11-06 09:05:56.793269] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:20.510 [2024-11-06 09:05:56.793442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.510 [ 00:13:20.510 { 00:13:20.510 "name": "BaseBdev2", 00:13:20.510 "aliases": [ 00:13:20.510 "8f2d3a8d-4181-4342-a79a-5cfa3d766fde" 00:13:20.510 ], 00:13:20.510 "product_name": "Malloc disk", 00:13:20.510 "block_size": 512, 00:13:20.510 "num_blocks": 65536, 00:13:20.510 "uuid": "8f2d3a8d-4181-4342-a79a-5cfa3d766fde", 00:13:20.510 "assigned_rate_limits": { 00:13:20.510 "rw_ios_per_sec": 0, 00:13:20.510 "rw_mbytes_per_sec": 0, 00:13:20.510 "r_mbytes_per_sec": 0, 00:13:20.510 "w_mbytes_per_sec": 0 00:13:20.510 }, 00:13:20.510 "claimed": true, 00:13:20.510 "claim_type": "exclusive_write", 00:13:20.510 "zoned": false, 00:13:20.510 "supported_io_types": { 00:13:20.510 "read": true, 00:13:20.510 "write": true, 00:13:20.510 "unmap": true, 00:13:20.510 "flush": true, 00:13:20.510 "reset": true, 00:13:20.510 "nvme_admin": false, 00:13:20.510 "nvme_io": false, 00:13:20.510 "nvme_io_md": false, 00:13:20.510 "write_zeroes": true, 00:13:20.510 "zcopy": true, 00:13:20.510 "get_zone_info": false, 00:13:20.510 "zone_management": false, 00:13:20.510 "zone_append": false, 00:13:20.510 "compare": false, 00:13:20.510 "compare_and_write": false, 00:13:20.510 "abort": true, 00:13:20.510 "seek_hole": false, 00:13:20.510 "seek_data": false, 00:13:20.510 "copy": true, 00:13:20.510 "nvme_iov_md": false 00:13:20.510 }, 00:13:20.510 "memory_domains": [ 00:13:20.510 { 00:13:20.510 "dma_device_id": "system", 00:13:20.510 "dma_device_type": 1 00:13:20.510 }, 00:13:20.510 { 00:13:20.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.510 "dma_device_type": 2 00:13:20.510 } 00:13:20.510 ], 00:13:20.510 "driver_specific": {} 00:13:20.510 } 00:13:20.510 ] 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.510 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.510 09:05:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.510 "name": "Existed_Raid", 00:13:20.510 "uuid": "3a1483c1-01b8-43c2-9e0a-2afcae8a3a70", 00:13:20.510 "strip_size_kb": 0, 00:13:20.510 "state": "online", 00:13:20.510 "raid_level": "raid1", 00:13:20.510 "superblock": true, 00:13:20.510 "num_base_bdevs": 2, 00:13:20.510 "num_base_bdevs_discovered": 2, 00:13:20.510 "num_base_bdevs_operational": 2, 00:13:20.510 "base_bdevs_list": [ 00:13:20.510 { 00:13:20.510 "name": "BaseBdev1", 00:13:20.510 "uuid": "1e17c431-cb27-4f41-a41d-87cb4bb60507", 00:13:20.510 "is_configured": true, 00:13:20.510 "data_offset": 2048, 00:13:20.510 "data_size": 63488 00:13:20.511 }, 00:13:20.511 { 00:13:20.511 "name": "BaseBdev2", 00:13:20.511 "uuid": "8f2d3a8d-4181-4342-a79a-5cfa3d766fde", 00:13:20.511 "is_configured": true, 00:13:20.511 "data_offset": 2048, 00:13:20.511 "data_size": 63488 00:13:20.511 } 00:13:20.511 ] 00:13:20.511 }' 00:13:20.511 09:05:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.511 09:05:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:21.077 [2024-11-06 09:05:57.345010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:21.077 "name": "Existed_Raid", 00:13:21.077 "aliases": [ 00:13:21.077 "3a1483c1-01b8-43c2-9e0a-2afcae8a3a70" 00:13:21.077 ], 00:13:21.077 "product_name": "Raid Volume", 00:13:21.077 "block_size": 512, 00:13:21.077 "num_blocks": 63488, 00:13:21.077 "uuid": "3a1483c1-01b8-43c2-9e0a-2afcae8a3a70", 00:13:21.077 "assigned_rate_limits": { 00:13:21.077 "rw_ios_per_sec": 0, 00:13:21.077 "rw_mbytes_per_sec": 0, 00:13:21.077 "r_mbytes_per_sec": 0, 00:13:21.077 "w_mbytes_per_sec": 0 00:13:21.077 }, 00:13:21.077 "claimed": false, 00:13:21.077 "zoned": false, 00:13:21.077 "supported_io_types": { 00:13:21.077 "read": true, 00:13:21.077 "write": true, 00:13:21.077 "unmap": false, 00:13:21.077 "flush": false, 00:13:21.077 "reset": true, 00:13:21.077 "nvme_admin": false, 00:13:21.077 "nvme_io": false, 00:13:21.077 "nvme_io_md": false, 00:13:21.077 "write_zeroes": true, 00:13:21.077 "zcopy": false, 00:13:21.077 "get_zone_info": false, 00:13:21.077 "zone_management": false, 00:13:21.077 "zone_append": false, 00:13:21.077 "compare": false, 00:13:21.077 "compare_and_write": false, 00:13:21.077 "abort": false, 00:13:21.077 "seek_hole": false, 00:13:21.077 "seek_data": false, 00:13:21.077 "copy": false, 00:13:21.077 "nvme_iov_md": false 00:13:21.077 }, 00:13:21.077 "memory_domains": [ 00:13:21.077 { 00:13:21.077 
"dma_device_id": "system", 00:13:21.077 "dma_device_type": 1 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.077 "dma_device_type": 2 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "dma_device_id": "system", 00:13:21.077 "dma_device_type": 1 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.077 "dma_device_type": 2 00:13:21.077 } 00:13:21.077 ], 00:13:21.077 "driver_specific": { 00:13:21.077 "raid": { 00:13:21.077 "uuid": "3a1483c1-01b8-43c2-9e0a-2afcae8a3a70", 00:13:21.077 "strip_size_kb": 0, 00:13:21.077 "state": "online", 00:13:21.077 "raid_level": "raid1", 00:13:21.077 "superblock": true, 00:13:21.077 "num_base_bdevs": 2, 00:13:21.077 "num_base_bdevs_discovered": 2, 00:13:21.077 "num_base_bdevs_operational": 2, 00:13:21.077 "base_bdevs_list": [ 00:13:21.077 { 00:13:21.077 "name": "BaseBdev1", 00:13:21.077 "uuid": "1e17c431-cb27-4f41-a41d-87cb4bb60507", 00:13:21.077 "is_configured": true, 00:13:21.077 "data_offset": 2048, 00:13:21.077 "data_size": 63488 00:13:21.077 }, 00:13:21.077 { 00:13:21.077 "name": "BaseBdev2", 00:13:21.077 "uuid": "8f2d3a8d-4181-4342-a79a-5cfa3d766fde", 00:13:21.077 "is_configured": true, 00:13:21.077 "data_offset": 2048, 00:13:21.077 "data_size": 63488 00:13:21.077 } 00:13:21.077 ] 00:13:21.077 } 00:13:21.077 } 00:13:21.077 }' 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:21.077 BaseBdev2' 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:21.077 09:05:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.077 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.336 [2024-11-06 09:05:57.620790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:13:21.336 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.337 09:05:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.337 "name": "Existed_Raid", 00:13:21.337 "uuid": "3a1483c1-01b8-43c2-9e0a-2afcae8a3a70", 00:13:21.337 "strip_size_kb": 0, 00:13:21.337 "state": "online", 00:13:21.337 "raid_level": "raid1", 00:13:21.337 "superblock": true, 00:13:21.337 "num_base_bdevs": 2, 00:13:21.337 "num_base_bdevs_discovered": 1, 00:13:21.337 "num_base_bdevs_operational": 1, 00:13:21.337 "base_bdevs_list": [ 00:13:21.337 { 00:13:21.337 "name": null, 00:13:21.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.337 "is_configured": false, 00:13:21.337 "data_offset": 0, 00:13:21.337 "data_size": 63488 00:13:21.337 }, 00:13:21.337 { 00:13:21.337 "name": "BaseBdev2", 00:13:21.337 "uuid": "8f2d3a8d-4181-4342-a79a-5cfa3d766fde", 00:13:21.337 "is_configured": true, 00:13:21.337 "data_offset": 2048, 00:13:21.337 "data_size": 63488 00:13:21.337 } 00:13:21.337 ] 00:13:21.337 }' 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.337 09:05:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.921 09:05:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.921 [2024-11-06 09:05:58.263018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.921 [2024-11-06 09:05:58.263149] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:21.921 [2024-11-06 09:05:58.347364] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:21.921 [2024-11-06 09:05:58.347450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:21.921 [2024-11-06 09:05:58.347473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, 
state offline 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62849 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 62849 ']' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 62849 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62849 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:21.921 killing process with pid 62849 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62849' 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 62849 00:13:21.921 [2024-11-06 09:05:58.438140] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:21.921 09:05:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 62849 00:13:21.921 [2024-11-06 09:05:58.452969] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:23.301 09:05:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:23.301 ************************************ 00:13:23.301 END TEST raid_state_function_test_sb 00:13:23.301 ************************************ 00:13:23.301 00:13:23.301 real 0m5.511s 00:13:23.301 user 0m8.372s 00:13:23.301 sys 0m0.821s 00:13:23.301 09:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:23.301 09:05:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.301 09:05:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:13:23.301 09:05:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:23.301 09:05:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:23.301 09:05:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:23.301 ************************************ 00:13:23.301 START TEST raid_superblock_test 00:13:23.301 ************************************ 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:13:23.301 
09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:23.301 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63107 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63107 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 63107 ']' 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:23.302 09:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.302 [2024-11-06 09:05:59.609800] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:13:23.302 [2024-11-06 09:05:59.610031] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:13:23.302 [2024-11-06 09:05:59.797723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.561 [2024-11-06 09:05:59.932185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.818 [2024-11-06 09:06:00.136438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:23.818 [2024-11-06 09:06:00.136509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:24.385 09:06:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 malloc1 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 [2024-11-06 09:06:00.665256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:24.385 [2024-11-06 09:06:00.665367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.385 [2024-11-06 09:06:00.665398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:24.385 
[2024-11-06 09:06:00.665414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.385 [2024-11-06 09:06:00.668280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.385 [2024-11-06 09:06:00.668322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:24.385 pt1 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 malloc2 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:24.385 
09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 [2024-11-06 09:06:00.721307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:24.385 [2024-11-06 09:06:00.721375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:24.385 [2024-11-06 09:06:00.721406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:24.385 [2024-11-06 09:06:00.721422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:24.385 [2024-11-06 09:06:00.724143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:24.385 [2024-11-06 09:06:00.724189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:24.385 pt2 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.385 [2024-11-06 09:06:00.733387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:24.385 [2024-11-06 09:06:00.735795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:24.385 [2024-11-06 09:06:00.736020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:24.385 [2024-11-06 
09:06:00.736044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.385 [2024-11-06 09:06:00.736345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:13:24.385 [2024-11-06 09:06:00.736539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:24.385 [2024-11-06 09:06:00.736564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:24.385 [2024-11-06 09:06:00.736745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.385 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.386 "name": "raid_bdev1", 00:13:24.386 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:24.386 "strip_size_kb": 0, 00:13:24.386 "state": "online", 00:13:24.386 "raid_level": "raid1", 00:13:24.386 "superblock": true, 00:13:24.386 "num_base_bdevs": 2, 00:13:24.386 "num_base_bdevs_discovered": 2, 00:13:24.386 "num_base_bdevs_operational": 2, 00:13:24.386 "base_bdevs_list": [ 00:13:24.386 { 00:13:24.386 "name": "pt1", 00:13:24.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.386 "is_configured": true, 00:13:24.386 "data_offset": 2048, 00:13:24.386 "data_size": 63488 00:13:24.386 }, 00:13:24.386 { 00:13:24.386 "name": "pt2", 00:13:24.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.386 "is_configured": true, 00:13:24.386 "data_offset": 2048, 00:13:24.386 "data_size": 63488 00:13:24.386 } 00:13:24.386 ] 00:13:24.386 }' 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.386 09:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.952 
09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.952 [2024-11-06 09:06:01.241880] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.952 "name": "raid_bdev1", 00:13:24.952 "aliases": [ 00:13:24.952 "fe561237-4d9f-484c-8304-288cdb5d100f" 00:13:24.952 ], 00:13:24.952 "product_name": "Raid Volume", 00:13:24.952 "block_size": 512, 00:13:24.952 "num_blocks": 63488, 00:13:24.952 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:24.952 "assigned_rate_limits": { 00:13:24.952 "rw_ios_per_sec": 0, 00:13:24.952 "rw_mbytes_per_sec": 0, 00:13:24.952 "r_mbytes_per_sec": 0, 00:13:24.952 "w_mbytes_per_sec": 0 00:13:24.952 }, 00:13:24.952 "claimed": false, 00:13:24.952 "zoned": false, 00:13:24.952 "supported_io_types": { 00:13:24.952 "read": true, 00:13:24.952 "write": true, 00:13:24.952 "unmap": false, 00:13:24.952 "flush": false, 00:13:24.952 "reset": true, 00:13:24.952 "nvme_admin": false, 00:13:24.952 "nvme_io": false, 00:13:24.952 "nvme_io_md": false, 00:13:24.952 "write_zeroes": true, 00:13:24.952 "zcopy": false, 00:13:24.952 "get_zone_info": false, 00:13:24.952 "zone_management": false, 00:13:24.952 "zone_append": false, 00:13:24.952 "compare": false, 00:13:24.952 
"compare_and_write": false, 00:13:24.952 "abort": false, 00:13:24.952 "seek_hole": false, 00:13:24.952 "seek_data": false, 00:13:24.952 "copy": false, 00:13:24.952 "nvme_iov_md": false 00:13:24.952 }, 00:13:24.952 "memory_domains": [ 00:13:24.952 { 00:13:24.952 "dma_device_id": "system", 00:13:24.952 "dma_device_type": 1 00:13:24.952 }, 00:13:24.952 { 00:13:24.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.952 "dma_device_type": 2 00:13:24.952 }, 00:13:24.952 { 00:13:24.952 "dma_device_id": "system", 00:13:24.952 "dma_device_type": 1 00:13:24.952 }, 00:13:24.952 { 00:13:24.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.952 "dma_device_type": 2 00:13:24.952 } 00:13:24.952 ], 00:13:24.952 "driver_specific": { 00:13:24.952 "raid": { 00:13:24.952 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:24.952 "strip_size_kb": 0, 00:13:24.952 "state": "online", 00:13:24.952 "raid_level": "raid1", 00:13:24.952 "superblock": true, 00:13:24.952 "num_base_bdevs": 2, 00:13:24.952 "num_base_bdevs_discovered": 2, 00:13:24.952 "num_base_bdevs_operational": 2, 00:13:24.952 "base_bdevs_list": [ 00:13:24.952 { 00:13:24.952 "name": "pt1", 00:13:24.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:24.952 "is_configured": true, 00:13:24.952 "data_offset": 2048, 00:13:24.952 "data_size": 63488 00:13:24.952 }, 00:13:24.952 { 00:13:24.952 "name": "pt2", 00:13:24.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:24.952 "is_configured": true, 00:13:24.952 "data_offset": 2048, 00:13:24.952 "data_size": 63488 00:13:24.952 } 00:13:24.952 ] 00:13:24.952 } 00:13:24.952 } 00:13:24.952 }' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:24.952 pt2' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.952 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:24.953 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.953 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.953 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.953 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:25.211 09:06:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:25.211 [2024-11-06 09:06:01.513969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fe561237-4d9f-484c-8304-288cdb5d100f 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fe561237-4d9f-484c-8304-288cdb5d100f ']' 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 [2024-11-06 09:06:01.569563] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.211 [2024-11-06 09:06:01.569593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.211 [2024-11-06 09:06:01.569716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.211 [2024-11-06 09:06:01.569787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.211 [2024-11-06 09:06:01.569822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 [2024-11-06 09:06:01.717703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:25.211 [2024-11-06 09:06:01.720208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:25.211 [2024-11-06 
09:06:01.720302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:25.211 [2024-11-06 09:06:01.720377] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:25.211 [2024-11-06 09:06:01.720403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:25.211 [2024-11-06 09:06:01.720418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:25.211 request: 00:13:25.211 { 00:13:25.211 "name": "raid_bdev1", 00:13:25.211 "raid_level": "raid1", 00:13:25.211 "base_bdevs": [ 00:13:25.211 "malloc1", 00:13:25.211 "malloc2" 00:13:25.211 ], 00:13:25.211 "superblock": false, 00:13:25.211 "method": "bdev_raid_create", 00:13:25.211 "req_id": 1 00:13:25.211 } 00:13:25.211 Got JSON-RPC error response 00:13:25.211 response: 00:13:25.211 { 00:13:25.211 "code": -17, 00:13:25.211 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:25.211 } 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:25.211 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.470 [2024-11-06 09:06:01.785659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:25.470 [2024-11-06 09:06:01.785742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.470 [2024-11-06 09:06:01.785767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:25.470 [2024-11-06 09:06:01.785785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.470 [2024-11-06 09:06:01.788607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.470 [2024-11-06 09:06:01.788668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:25.470 [2024-11-06 09:06:01.788765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:25.470 [2024-11-06 09:06:01.788858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:25.470 pt1 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.470 "name": "raid_bdev1", 00:13:25.470 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:25.470 "strip_size_kb": 0, 00:13:25.470 "state": "configuring", 00:13:25.470 "raid_level": "raid1", 00:13:25.470 "superblock": true, 00:13:25.470 "num_base_bdevs": 2, 00:13:25.470 "num_base_bdevs_discovered": 1, 00:13:25.470 "num_base_bdevs_operational": 2, 00:13:25.470 "base_bdevs_list": [ 00:13:25.470 { 00:13:25.470 "name": 
"pt1", 00:13:25.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:25.470 "is_configured": true, 00:13:25.470 "data_offset": 2048, 00:13:25.470 "data_size": 63488 00:13:25.470 }, 00:13:25.470 { 00:13:25.470 "name": null, 00:13:25.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:25.470 "is_configured": false, 00:13:25.470 "data_offset": 2048, 00:13:25.470 "data_size": 63488 00:13:25.470 } 00:13:25.470 ] 00:13:25.470 }' 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.470 09:06:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.037 [2024-11-06 09:06:02.313968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:26.037 [2024-11-06 09:06:02.314055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.037 [2024-11-06 09:06:02.314086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:26.037 [2024-11-06 09:06:02.314105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.037 [2024-11-06 09:06:02.314699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.037 [2024-11-06 09:06:02.314740] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:13:26.037 [2024-11-06 09:06:02.314873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:26.037 [2024-11-06 09:06:02.314909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:26.037 [2024-11-06 09:06:02.315060] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:26.037 [2024-11-06 09:06:02.315100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.037 [2024-11-06 09:06:02.315412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:26.037 [2024-11-06 09:06:02.315618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:26.037 [2024-11-06 09:06:02.315635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:26.037 [2024-11-06 09:06:02.315803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.037 pt2 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.037 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.038 "name": "raid_bdev1", 00:13:26.038 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:26.038 "strip_size_kb": 0, 00:13:26.038 "state": "online", 00:13:26.038 "raid_level": "raid1", 00:13:26.038 "superblock": true, 00:13:26.038 "num_base_bdevs": 2, 00:13:26.038 "num_base_bdevs_discovered": 2, 00:13:26.038 "num_base_bdevs_operational": 2, 00:13:26.038 "base_bdevs_list": [ 00:13:26.038 { 00:13:26.038 "name": "pt1", 00:13:26.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.038 "is_configured": true, 00:13:26.038 "data_offset": 2048, 00:13:26.038 "data_size": 63488 00:13:26.038 }, 00:13:26.038 { 00:13:26.038 "name": "pt2", 00:13:26.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.038 "is_configured": true, 00:13:26.038 "data_offset": 2048, 00:13:26.038 "data_size": 63488 00:13:26.038 } 00:13:26.038 ] 00:13:26.038 }' 
00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.038 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.606 [2024-11-06 09:06:02.850458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:26.606 "name": "raid_bdev1", 00:13:26.606 "aliases": [ 00:13:26.606 "fe561237-4d9f-484c-8304-288cdb5d100f" 00:13:26.606 ], 00:13:26.606 "product_name": "Raid Volume", 00:13:26.606 "block_size": 512, 00:13:26.606 "num_blocks": 63488, 00:13:26.606 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:26.606 "assigned_rate_limits": { 00:13:26.606 "rw_ios_per_sec": 0, 00:13:26.606 "rw_mbytes_per_sec": 
0, 00:13:26.606 "r_mbytes_per_sec": 0, 00:13:26.606 "w_mbytes_per_sec": 0 00:13:26.606 }, 00:13:26.606 "claimed": false, 00:13:26.606 "zoned": false, 00:13:26.606 "supported_io_types": { 00:13:26.606 "read": true, 00:13:26.606 "write": true, 00:13:26.606 "unmap": false, 00:13:26.606 "flush": false, 00:13:26.606 "reset": true, 00:13:26.606 "nvme_admin": false, 00:13:26.606 "nvme_io": false, 00:13:26.606 "nvme_io_md": false, 00:13:26.606 "write_zeroes": true, 00:13:26.606 "zcopy": false, 00:13:26.606 "get_zone_info": false, 00:13:26.606 "zone_management": false, 00:13:26.606 "zone_append": false, 00:13:26.606 "compare": false, 00:13:26.606 "compare_and_write": false, 00:13:26.606 "abort": false, 00:13:26.606 "seek_hole": false, 00:13:26.606 "seek_data": false, 00:13:26.606 "copy": false, 00:13:26.606 "nvme_iov_md": false 00:13:26.606 }, 00:13:26.606 "memory_domains": [ 00:13:26.606 { 00:13:26.606 "dma_device_id": "system", 00:13:26.606 "dma_device_type": 1 00:13:26.606 }, 00:13:26.606 { 00:13:26.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.606 "dma_device_type": 2 00:13:26.606 }, 00:13:26.606 { 00:13:26.606 "dma_device_id": "system", 00:13:26.606 "dma_device_type": 1 00:13:26.606 }, 00:13:26.606 { 00:13:26.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.606 "dma_device_type": 2 00:13:26.606 } 00:13:26.606 ], 00:13:26.606 "driver_specific": { 00:13:26.606 "raid": { 00:13:26.606 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:26.606 "strip_size_kb": 0, 00:13:26.606 "state": "online", 00:13:26.606 "raid_level": "raid1", 00:13:26.606 "superblock": true, 00:13:26.606 "num_base_bdevs": 2, 00:13:26.606 "num_base_bdevs_discovered": 2, 00:13:26.606 "num_base_bdevs_operational": 2, 00:13:26.606 "base_bdevs_list": [ 00:13:26.606 { 00:13:26.606 "name": "pt1", 00:13:26.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:26.606 "is_configured": true, 00:13:26.606 "data_offset": 2048, 00:13:26.606 "data_size": 63488 00:13:26.606 }, 00:13:26.606 { 
00:13:26.606 "name": "pt2", 00:13:26.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.606 "is_configured": true, 00:13:26.606 "data_offset": 2048, 00:13:26.606 "data_size": 63488 00:13:26.606 } 00:13:26.606 ] 00:13:26.606 } 00:13:26.606 } 00:13:26.606 }' 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:26.606 pt2' 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.606 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:26.607 09:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:26.607 [2024-11-06 09:06:03.118502] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:26.607 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fe561237-4d9f-484c-8304-288cdb5d100f '!=' fe561237-4d9f-484c-8304-288cdb5d100f ']' 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.866 09:06:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.866 [2024-11-06 09:06:03.174271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.866 "name": "raid_bdev1", 00:13:26.866 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:26.866 "strip_size_kb": 0, 00:13:26.866 "state": "online", 00:13:26.866 "raid_level": "raid1", 00:13:26.866 "superblock": true, 00:13:26.866 "num_base_bdevs": 2, 00:13:26.866 "num_base_bdevs_discovered": 1, 00:13:26.866 "num_base_bdevs_operational": 1, 00:13:26.866 "base_bdevs_list": [ 00:13:26.866 { 00:13:26.866 "name": null, 00:13:26.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.866 "is_configured": false, 00:13:26.866 "data_offset": 0, 00:13:26.866 "data_size": 63488 00:13:26.866 }, 00:13:26.866 { 00:13:26.866 "name": "pt2", 00:13:26.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:26.866 "is_configured": true, 00:13:26.866 "data_offset": 2048, 00:13:26.866 "data_size": 63488 00:13:26.866 } 00:13:26.866 ] 00:13:26.866 }' 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.866 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 [2024-11-06 09:06:03.686360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.435 [2024-11-06 09:06:03.686410] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.435 [2024-11-06 09:06:03.686513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.435 [2024-11-06 09:06:03.686609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.435 [2024-11-06 09:06:03.686630] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=1 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 [2024-11-06 09:06:03.754325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:27.435 [2024-11-06 09:06:03.754405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.435 [2024-11-06 09:06:03.754438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:27.435 [2024-11-06 09:06:03.754458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.435 [2024-11-06 09:06:03.757394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.435 [2024-11-06 09:06:03.757455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:27.435 [2024-11-06 09:06:03.757577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:27.435 [2024-11-06 09:06:03.757634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:27.435 [2024-11-06 09:06:03.757765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:27.435 [2024-11-06 09:06:03.757786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:27.435 [2024-11-06 09:06:03.758114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:27.435 [2024-11-06 09:06:03.758301] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:27.435 [2024-11-06 09:06:03.758317] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008200 00:13:27.435 pt2 00:13:27.435 [2024-11-06 09:06:03.758592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.435 "name": "raid_bdev1", 00:13:27.435 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:27.435 "strip_size_kb": 0, 00:13:27.435 "state": "online", 00:13:27.435 "raid_level": "raid1", 00:13:27.435 "superblock": true, 00:13:27.435 "num_base_bdevs": 2, 00:13:27.435 "num_base_bdevs_discovered": 1, 00:13:27.435 "num_base_bdevs_operational": 1, 00:13:27.435 "base_bdevs_list": [ 00:13:27.435 { 00:13:27.435 "name": null, 00:13:27.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.435 "is_configured": false, 00:13:27.435 "data_offset": 2048, 00:13:27.435 "data_size": 63488 00:13:27.435 }, 00:13:27.435 { 00:13:27.435 "name": "pt2", 00:13:27.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:27.435 "is_configured": true, 00:13:27.435 "data_offset": 2048, 00:13:27.435 "data_size": 63488 00:13:27.435 } 00:13:27.435 ] 00:13:27.435 }' 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.435 09:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.003 [2024-11-06 09:06:04.270693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.003 [2024-11-06 09:06:04.270762] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.003 [2024-11-06 09:06:04.270879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.003 [2024-11-06 09:06:04.270949] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.003 [2024-11-06 09:06:04.270964] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.003 [2024-11-06 09:06:04.330730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:28.003 [2024-11-06 09:06:04.330795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.003 [2024-11-06 09:06:04.330835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:28.003 [2024-11-06 09:06:04.330854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.003 [2024-11-06 09:06:04.333785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:13:28.003 [2024-11-06 09:06:04.333840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:28.003 [2024-11-06 09:06:04.333958] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:28.003 [2024-11-06 09:06:04.334022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:28.003 [2024-11-06 09:06:04.334184] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:28.003 [2024-11-06 09:06:04.334201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.003 [2024-11-06 09:06:04.334222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:28.003 [2024-11-06 09:06:04.334290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:28.003 [2024-11-06 09:06:04.334390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:28.003 [2024-11-06 09:06:04.334405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.003 [2024-11-06 09:06:04.334722] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:28.003 [2024-11-06 09:06:04.334930] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:28.003 [2024-11-06 09:06:04.334952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:28.003 [2024-11-06 09:06:04.335176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.003 pt1 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.003 "name": "raid_bdev1", 00:13:28.003 "uuid": "fe561237-4d9f-484c-8304-288cdb5d100f", 00:13:28.003 "strip_size_kb": 0, 00:13:28.003 "state": "online", 00:13:28.003 "raid_level": "raid1", 00:13:28.003 "superblock": true, 00:13:28.003 "num_base_bdevs": 2, 00:13:28.003 
"num_base_bdevs_discovered": 1, 00:13:28.003 "num_base_bdevs_operational": 1, 00:13:28.003 "base_bdevs_list": [ 00:13:28.003 { 00:13:28.003 "name": null, 00:13:28.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.003 "is_configured": false, 00:13:28.003 "data_offset": 2048, 00:13:28.003 "data_size": 63488 00:13:28.003 }, 00:13:28.003 { 00:13:28.003 "name": "pt2", 00:13:28.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:28.003 "is_configured": true, 00:13:28.003 "data_offset": 2048, 00:13:28.003 "data_size": 63488 00:13:28.003 } 00:13:28.003 ] 00:13:28.003 }' 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.003 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.571 [2024-11-06 09:06:04.923647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fe561237-4d9f-484c-8304-288cdb5d100f '!=' fe561237-4d9f-484c-8304-288cdb5d100f ']' 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63107 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 63107 ']' 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 63107 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:28.571 09:06:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63107 00:13:28.571 09:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:28.571 killing process with pid 63107 00:13:28.571 09:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:28.571 09:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63107' 00:13:28.571 09:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 63107 00:13:28.571 [2024-11-06 09:06:05.002523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.571 09:06:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 63107 00:13:28.571 [2024-11-06 09:06:05.002632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.571 [2024-11-06 09:06:05.002696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.571 [2024-11-06 09:06:05.002728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:28.830 [2024-11-06 09:06:05.199107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.818 09:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:29.818 00:13:29.818 real 0m6.733s 00:13:29.818 user 0m10.660s 00:13:29.818 sys 0m0.980s 00:13:29.818 09:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:29.818 ************************************ 00:13:29.818 END TEST raid_superblock_test 00:13:29.818 ************************************ 00:13:29.818 09:06:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.818 09:06:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:13:29.818 09:06:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:29.818 09:06:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:29.818 09:06:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.818 ************************************ 00:13:29.818 START TEST raid_read_error_test 00:13:29.818 ************************************ 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:29.818 09:06:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ex3nsQtlxR 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63437 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63437 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 63437 ']' 00:13:29.818 09:06:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.818 09:06:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.076 [2024-11-06 09:06:06.416197] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:13:30.076 [2024-11-06 09:06:06.416390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63437 ] 00:13:30.076 [2024-11-06 09:06:06.604301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.334 [2024-11-06 09:06:06.739944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.593 [2024-11-06 09:06:06.954001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:30.593 [2024-11-06 09:06:06.954082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 BaseBdev1_malloc 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 true 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 [2024-11-06 09:06:07.478590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:31.161 [2024-11-06 09:06:07.478685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.161 [2024-11-06 09:06:07.478718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:31.161 [2024-11-06 09:06:07.478736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.161 [2024-11-06 09:06:07.481660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.161 [2024-11-06 09:06:07.481751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:31.161 BaseBdev1 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 BaseBdev2_malloc 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 true 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 [2024-11-06 09:06:07.545126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:31.161 [2024-11-06 09:06:07.545242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.161 [2024-11-06 09:06:07.545267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:31.161 [2024-11-06 09:06:07.545285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.161 [2024-11-06 09:06:07.548334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.161 [2024-11-06 09:06:07.548396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:31.161 BaseBdev2 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 [2024-11-06 09:06:07.557384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.161 
[2024-11-06 09:06:07.560194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.161 [2024-11-06 09:06:07.560448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:31.161 [2024-11-06 09:06:07.560472] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:31.161 [2024-11-06 09:06:07.560835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:31.161 [2024-11-06 09:06:07.561098] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:31.161 [2024-11-06 09:06:07.561114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:31.161 [2024-11-06 09:06:07.561351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.161 "name": "raid_bdev1", 00:13:31.161 "uuid": "e50e9725-aaaa-4d12-be41-356420a83d69", 00:13:31.161 "strip_size_kb": 0, 00:13:31.161 "state": "online", 00:13:31.161 "raid_level": "raid1", 00:13:31.161 "superblock": true, 00:13:31.161 "num_base_bdevs": 2, 00:13:31.161 "num_base_bdevs_discovered": 2, 00:13:31.161 "num_base_bdevs_operational": 2, 00:13:31.161 "base_bdevs_list": [ 00:13:31.161 { 00:13:31.161 "name": "BaseBdev1", 00:13:31.161 "uuid": "f5eb4398-18b3-51dd-bd0b-ac334ee99bbe", 00:13:31.161 "is_configured": true, 00:13:31.161 "data_offset": 2048, 00:13:31.161 "data_size": 63488 00:13:31.161 }, 00:13:31.161 { 00:13:31.161 "name": "BaseBdev2", 00:13:31.161 "uuid": "86d7c835-abeb-5c39-bb04-2adc8d9e59a7", 00:13:31.161 "is_configured": true, 00:13:31.161 "data_offset": 2048, 00:13:31.161 "data_size": 63488 00:13:31.161 } 00:13:31.161 ] 00:13:31.161 }' 00:13:31.161 09:06:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.162 09:06:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:31.727 09:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:31.727 09:06:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.727 [2024-11-06 09:06:08.202948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:32.662 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.663 "name": "raid_bdev1", 00:13:32.663 "uuid": "e50e9725-aaaa-4d12-be41-356420a83d69", 00:13:32.663 "strip_size_kb": 0, 00:13:32.663 "state": "online", 00:13:32.663 "raid_level": "raid1", 00:13:32.663 "superblock": true, 00:13:32.663 "num_base_bdevs": 2, 00:13:32.663 "num_base_bdevs_discovered": 2, 00:13:32.663 "num_base_bdevs_operational": 2, 00:13:32.663 "base_bdevs_list": [ 00:13:32.663 { 00:13:32.663 "name": "BaseBdev1", 00:13:32.663 "uuid": "f5eb4398-18b3-51dd-bd0b-ac334ee99bbe", 00:13:32.663 "is_configured": true, 00:13:32.663 "data_offset": 2048, 00:13:32.663 "data_size": 63488 00:13:32.663 }, 00:13:32.663 { 00:13:32.663 "name": "BaseBdev2", 00:13:32.663 "uuid": "86d7c835-abeb-5c39-bb04-2adc8d9e59a7", 00:13:32.663 "is_configured": true, 00:13:32.663 "data_offset": 2048, 00:13:32.663 "data_size": 63488 00:13:32.663 } 00:13:32.663 ] 00:13:32.663 }' 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.663 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.229 09:06:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.229 [2024-11-06 09:06:09.642646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.229 [2024-11-06 09:06:09.642922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.229 [2024-11-06 09:06:09.646589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.229 [2024-11-06 09:06:09.646846] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.229 [2024-11-06 09:06:09.647084] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.229 [2024-11-06 09:06:09.647354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:33.229 { 00:13:33.229 "results": [ 00:13:33.229 { 00:13:33.229 "job": "raid_bdev1", 00:13:33.229 "core_mask": "0x1", 00:13:33.229 "workload": "randrw", 00:13:33.229 "percentage": 50, 00:13:33.229 "status": "finished", 00:13:33.229 "queue_depth": 1, 00:13:33.229 "io_size": 131072, 00:13:33.229 "runtime": 1.436689, 00:13:33.229 "iops": 12135.542208508592, 00:13:33.229 "mibps": 1516.942776063574, 00:13:33.229 "io_failed": 0, 00:13:33.229 "io_timeout": 0, 00:13:33.229 "avg_latency_us": 77.94064395025678, 00:13:33.229 "min_latency_us": 38.4, 00:13:33.229 "max_latency_us": 1906.5018181818182 00:13:33.229 } 00:13:33.229 ], 00:13:33.229 "core_count": 1 00:13:33.229 } 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63437 00:13:33.229 09:06:09
bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 63437 ']' 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 63437 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63437 00:13:33.229 killing process with pid 63437 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63437' 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 63437 00:13:33.229 [2024-11-06 09:06:09.690719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.229 09:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 63437 00:13:33.488 [2024-11-06 09:06:09.817337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ex3nsQtlxR 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:34.421 09:06:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:34.421 00:13:34.421 real 0m4.648s 00:13:34.421 user 0m5.811s 00:13:34.421 sys 0m0.599s 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:34.421 09:06:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.421 ************************************ 00:13:34.421 END TEST raid_read_error_test 00:13:34.421 ************************************ 00:13:34.681 09:06:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:13:34.681 09:06:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:34.681 09:06:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:34.681 09:06:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.681 ************************************ 00:13:34.681 START TEST raid_write_error_test 00:13:34.681 ************************************ 00:13:34.681 09:06:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:13:34.681 09:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:34.681 09:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:13:34.681 09:06:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:34.681 
09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1t80PTRCwX 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63587 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63587 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:34.681 
09:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 63587 ']' 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:34.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:34.681 09:06:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.681 [2024-11-06 09:06:11.131789] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:13:34.681 [2024-11-06 09:06:11.132718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63587 ] 00:13:34.971 [2024-11-06 09:06:11.326024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.971 [2024-11-06 09:06:11.460546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.243 [2024-11-06 09:06:11.673304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.243 [2024-11-06 09:06:11.673608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.810 BaseBdev1_malloc 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.810 true 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.810 [2024-11-06 09:06:12.216863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:35.810 [2024-11-06 09:06:12.216945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.810 [2024-11-06 09:06:12.216974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:35.810 [2024-11-06 09:06:12.216993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.810 [2024-11-06 09:06:12.219868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.810 [2024-11-06 09:06:12.220086] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.810 BaseBdev1 00:13:35.810 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.811 BaseBdev2_malloc 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.811 true 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.811 [2024-11-06 09:06:12.278907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:35.811 [2024-11-06 09:06:12.279117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.811 [2024-11-06 09:06:12.279153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:35.811 
[2024-11-06 09:06:12.279172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.811 [2024-11-06 09:06:12.282106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.811 [2024-11-06 09:06:12.282273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.811 BaseBdev2 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.811 [2024-11-06 09:06:12.287165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.811 [2024-11-06 09:06:12.289647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.811 [2024-11-06 09:06:12.289915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.811 [2024-11-06 09:06:12.289969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.811 [2024-11-06 09:06:12.290272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:35.811 [2024-11-06 09:06:12.290534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.811 [2024-11-06 09:06:12.290550] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:35.811 [2024-11-06 09:06:12.290720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.811 
09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.811 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.069 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.069 "name": "raid_bdev1", 00:13:36.069 "uuid": "fdac600c-05d0-4ecf-afa5-bffbff792d18", 00:13:36.069 "strip_size_kb": 0, 00:13:36.069 "state": "online", 00:13:36.069 "raid_level": "raid1", 00:13:36.069 "superblock": true, 00:13:36.069 
"num_base_bdevs": 2, 00:13:36.069 "num_base_bdevs_discovered": 2, 00:13:36.069 "num_base_bdevs_operational": 2, 00:13:36.069 "base_bdevs_list": [ 00:13:36.069 { 00:13:36.069 "name": "BaseBdev1", 00:13:36.069 "uuid": "f53e87c5-d5d0-5376-9ff4-7c2df07fa573", 00:13:36.069 "is_configured": true, 00:13:36.069 "data_offset": 2048, 00:13:36.069 "data_size": 63488 00:13:36.069 }, 00:13:36.069 { 00:13:36.069 "name": "BaseBdev2", 00:13:36.069 "uuid": "c8a8e395-09cc-53a5-bac3-457ee37e61a7", 00:13:36.069 "is_configured": true, 00:13:36.069 "data_offset": 2048, 00:13:36.069 "data_size": 63488 00:13:36.069 } 00:13:36.069 ] 00:13:36.069 }' 00:13:36.069 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.069 09:06:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.328 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:36.328 09:06:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:36.587 [2024-11-06 09:06:12.956750] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.523 [2024-11-06 09:06:13.809718] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:37.523 [2024-11-06 09:06:13.809785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:37.523 [2024-11-06 09:06:13.810045] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:13:37.523 09:06:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.523 "name": "raid_bdev1", 00:13:37.523 "uuid": "fdac600c-05d0-4ecf-afa5-bffbff792d18", 00:13:37.523 "strip_size_kb": 0, 00:13:37.523 "state": "online", 00:13:37.523 "raid_level": "raid1", 00:13:37.523 "superblock": true, 00:13:37.523 "num_base_bdevs": 2, 00:13:37.523 "num_base_bdevs_discovered": 1, 00:13:37.523 "num_base_bdevs_operational": 1, 00:13:37.523 "base_bdevs_list": [ 00:13:37.523 { 00:13:37.523 "name": null, 00:13:37.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.523 "is_configured": false, 00:13:37.523 "data_offset": 0, 00:13:37.523 "data_size": 63488 00:13:37.523 }, 00:13:37.523 { 00:13:37.523 "name": "BaseBdev2", 00:13:37.523 "uuid": "c8a8e395-09cc-53a5-bac3-457ee37e61a7", 00:13:37.523 "is_configured": true, 00:13:37.523 "data_offset": 2048, 00:13:37.523 "data_size": 63488 00:13:37.523 } 00:13:37.523 ] 00:13:37.523 }' 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.523 09:06:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.089 [2024-11-06 09:06:14.349503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:38.089 [2024-11-06 09:06:14.349541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.089 [2024-11-06 09:06:14.352968] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.089 [2024-11-06 09:06:14.353020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.089 [2024-11-06 09:06:14.353103] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.089 [2024-11-06 09:06:14.353118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:38.089 { 00:13:38.089 "results": [ 00:13:38.089 { 00:13:38.089 "job": "raid_bdev1", 00:13:38.089 "core_mask": "0x1", 00:13:38.089 "workload": "randrw", 00:13:38.089 "percentage": 50, 00:13:38.089 "status": "finished", 00:13:38.089 "queue_depth": 1, 00:13:38.089 "io_size": 131072, 00:13:38.089 "runtime": 1.390271, 00:13:38.089 "iops": 14423.806581594523, 00:13:38.089 "mibps": 1802.9758226993154, 00:13:38.089 "io_failed": 0, 00:13:38.089 "io_timeout": 0, 00:13:38.089 "avg_latency_us": 65.04940815928697, 00:13:38.089 "min_latency_us": 37.93454545454546, 00:13:38.089 "max_latency_us": 1750.1090909090908 00:13:38.089 } 00:13:38.089 ], 00:13:38.089 "core_count": 1 00:13:38.089 } 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63587 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 63587 ']' 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 63587 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63587 00:13:38.089 killing process with pid 63587 00:13:38.089 09:06:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:38.089 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63587' 00:13:38.090 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 63587 00:13:38.090 09:06:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 63587 00:13:38.090 [2024-11-06 09:06:14.392366] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.090 [2024-11-06 09:06:14.512067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.466 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1t80PTRCwX 00:13:39.466 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:39.466 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:39.467 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:39.467 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:39.467 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:39.467 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:39.467 09:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:39.467 00:13:39.467 real 0m4.609s 00:13:39.467 user 0m5.859s 00:13:39.467 sys 0m0.559s 00:13:39.467 ************************************ 00:13:39.467 END TEST raid_write_error_test 00:13:39.467 ************************************ 00:13:39.467 09:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:39.467 09:06:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:39.467 09:06:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:39.467 09:06:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:39.467 09:06:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:13:39.467 09:06:15 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:39.467 09:06:15 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:39.467 09:06:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.467 ************************************ 00:13:39.467 START TEST raid_state_function_test 00:13:39.467 ************************************ 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63726 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:39.467 09:06:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63726' 00:13:39.467 Process raid pid: 63726 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63726 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 63726 ']' 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:39.467 09:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.467 [2024-11-06 09:06:15.803753] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:13:39.467 [2024-11-06 09:06:15.804171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.467 [2024-11-06 09:06:15.990227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.726 [2024-11-06 09:06:16.124605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.985 [2024-11-06 09:06:16.335020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.985 [2024-11-06 09:06:16.335059] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.245 [2024-11-06 09:06:16.758125] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.245 [2024-11-06 09:06:16.758191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.245 [2024-11-06 09:06:16.758209] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.245 [2024-11-06 09:06:16.758225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.245 [2024-11-06 09:06:16.758236] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:13:40.245 [2024-11-06 09:06:16.758250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.245 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.504 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.504 09:06:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.504 "name": "Existed_Raid", 00:13:40.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.504 "strip_size_kb": 64, 00:13:40.504 "state": "configuring", 00:13:40.504 "raid_level": "raid0", 00:13:40.504 "superblock": false, 00:13:40.504 "num_base_bdevs": 3, 00:13:40.504 "num_base_bdevs_discovered": 0, 00:13:40.504 "num_base_bdevs_operational": 3, 00:13:40.504 "base_bdevs_list": [ 00:13:40.504 { 00:13:40.504 "name": "BaseBdev1", 00:13:40.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.504 "is_configured": false, 00:13:40.504 "data_offset": 0, 00:13:40.504 "data_size": 0 00:13:40.504 }, 00:13:40.504 { 00:13:40.504 "name": "BaseBdev2", 00:13:40.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.504 "is_configured": false, 00:13:40.504 "data_offset": 0, 00:13:40.504 "data_size": 0 00:13:40.504 }, 00:13:40.504 { 00:13:40.504 "name": "BaseBdev3", 00:13:40.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.504 "is_configured": false, 00:13:40.504 "data_offset": 0, 00:13:40.504 "data_size": 0 00:13:40.504 } 00:13:40.504 ] 00:13:40.504 }' 00:13:40.504 09:06:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.504 09:06:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.769 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:40.769 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.769 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.769 [2024-11-06 09:06:17.270374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:40.769 [2024-11-06 09:06:17.270418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:13:40.769 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.769 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:40.769 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.769 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.769 [2024-11-06 09:06:17.278311] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:40.769 [2024-11-06 09:06:17.278365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:40.769 [2024-11-06 09:06:17.278382] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:40.769 [2024-11-06 09:06:17.278398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:40.770 [2024-11-06 09:06:17.278407] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:40.770 [2024-11-06 09:06:17.278422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:40.770 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.770 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:40.770 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.770 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.040 [2024-11-06 09:06:17.324407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.040 BaseBdev1 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.040 [ 00:13:41.040 { 00:13:41.040 "name": "BaseBdev1", 00:13:41.040 "aliases": [ 00:13:41.040 "d70fcb6b-559c-4da5-8aaf-60d2649573f9" 00:13:41.040 ], 00:13:41.040 "product_name": "Malloc disk", 00:13:41.040 "block_size": 512, 00:13:41.040 "num_blocks": 65536, 00:13:41.040 "uuid": "d70fcb6b-559c-4da5-8aaf-60d2649573f9", 00:13:41.040 "assigned_rate_limits": { 00:13:41.040 "rw_ios_per_sec": 0, 00:13:41.040 "rw_mbytes_per_sec": 0, 00:13:41.040 "r_mbytes_per_sec": 0, 00:13:41.040 "w_mbytes_per_sec": 0 00:13:41.040 }, 
00:13:41.040 "claimed": true, 00:13:41.040 "claim_type": "exclusive_write", 00:13:41.040 "zoned": false, 00:13:41.040 "supported_io_types": { 00:13:41.040 "read": true, 00:13:41.040 "write": true, 00:13:41.040 "unmap": true, 00:13:41.040 "flush": true, 00:13:41.040 "reset": true, 00:13:41.040 "nvme_admin": false, 00:13:41.040 "nvme_io": false, 00:13:41.040 "nvme_io_md": false, 00:13:41.040 "write_zeroes": true, 00:13:41.040 "zcopy": true, 00:13:41.040 "get_zone_info": false, 00:13:41.040 "zone_management": false, 00:13:41.040 "zone_append": false, 00:13:41.040 "compare": false, 00:13:41.040 "compare_and_write": false, 00:13:41.040 "abort": true, 00:13:41.040 "seek_hole": false, 00:13:41.040 "seek_data": false, 00:13:41.040 "copy": true, 00:13:41.040 "nvme_iov_md": false 00:13:41.040 }, 00:13:41.040 "memory_domains": [ 00:13:41.040 { 00:13:41.040 "dma_device_id": "system", 00:13:41.040 "dma_device_type": 1 00:13:41.040 }, 00:13:41.040 { 00:13:41.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.040 "dma_device_type": 2 00:13:41.040 } 00:13:41.040 ], 00:13:41.040 "driver_specific": {} 00:13:41.040 } 00:13:41.040 ] 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.040 09:06:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.040 "name": "Existed_Raid", 00:13:41.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.040 "strip_size_kb": 64, 00:13:41.040 "state": "configuring", 00:13:41.040 "raid_level": "raid0", 00:13:41.040 "superblock": false, 00:13:41.040 "num_base_bdevs": 3, 00:13:41.040 "num_base_bdevs_discovered": 1, 00:13:41.040 "num_base_bdevs_operational": 3, 00:13:41.040 "base_bdevs_list": [ 00:13:41.040 { 00:13:41.040 "name": "BaseBdev1", 00:13:41.040 "uuid": "d70fcb6b-559c-4da5-8aaf-60d2649573f9", 00:13:41.040 "is_configured": true, 00:13:41.040 "data_offset": 0, 00:13:41.040 "data_size": 65536 00:13:41.040 }, 00:13:41.040 { 00:13:41.040 "name": "BaseBdev2", 00:13:41.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.040 "is_configured": false, 00:13:41.040 
"data_offset": 0, 00:13:41.040 "data_size": 0 00:13:41.040 }, 00:13:41.040 { 00:13:41.040 "name": "BaseBdev3", 00:13:41.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.040 "is_configured": false, 00:13:41.040 "data_offset": 0, 00:13:41.040 "data_size": 0 00:13:41.040 } 00:13:41.040 ] 00:13:41.040 }' 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.040 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.613 [2024-11-06 09:06:17.884707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:41.613 [2024-11-06 09:06:17.884923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.613 [2024-11-06 09:06:17.892744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.613 [2024-11-06 09:06:17.895418] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:41.613 [2024-11-06 09:06:17.895476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:13:41.613 [2024-11-06 09:06:17.895492] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:41.613 [2024-11-06 09:06:17.895524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.613 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.613 "name": "Existed_Raid", 00:13:41.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.613 "strip_size_kb": 64, 00:13:41.613 "state": "configuring", 00:13:41.613 "raid_level": "raid0", 00:13:41.613 "superblock": false, 00:13:41.614 "num_base_bdevs": 3, 00:13:41.614 "num_base_bdevs_discovered": 1, 00:13:41.614 "num_base_bdevs_operational": 3, 00:13:41.614 "base_bdevs_list": [ 00:13:41.614 { 00:13:41.614 "name": "BaseBdev1", 00:13:41.614 "uuid": "d70fcb6b-559c-4da5-8aaf-60d2649573f9", 00:13:41.614 "is_configured": true, 00:13:41.614 "data_offset": 0, 00:13:41.614 "data_size": 65536 00:13:41.614 }, 00:13:41.614 { 00:13:41.614 "name": "BaseBdev2", 00:13:41.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.614 "is_configured": false, 00:13:41.614 "data_offset": 0, 00:13:41.614 "data_size": 0 00:13:41.614 }, 00:13:41.614 { 00:13:41.614 "name": "BaseBdev3", 00:13:41.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.614 "is_configured": false, 00:13:41.614 "data_offset": 0, 00:13:41.614 "data_size": 0 00:13:41.614 } 00:13:41.614 ] 00:13:41.614 }' 00:13:41.614 09:06:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.614 09:06:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.198 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:42.198 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.198 [2024-11-06 09:06:18.460106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.198 BaseBdev2 00:13:42.198 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.198 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:42.198 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 [ 00:13:42.199 { 00:13:42.199 "name": "BaseBdev2", 00:13:42.199 "aliases": [ 00:13:42.199 "c0dca554-7c99-4ef1-a7cf-249bae9e5be3" 00:13:42.199 ], 00:13:42.199 
"product_name": "Malloc disk", 00:13:42.199 "block_size": 512, 00:13:42.199 "num_blocks": 65536, 00:13:42.199 "uuid": "c0dca554-7c99-4ef1-a7cf-249bae9e5be3", 00:13:42.199 "assigned_rate_limits": { 00:13:42.199 "rw_ios_per_sec": 0, 00:13:42.199 "rw_mbytes_per_sec": 0, 00:13:42.199 "r_mbytes_per_sec": 0, 00:13:42.199 "w_mbytes_per_sec": 0 00:13:42.199 }, 00:13:42.199 "claimed": true, 00:13:42.199 "claim_type": "exclusive_write", 00:13:42.199 "zoned": false, 00:13:42.199 "supported_io_types": { 00:13:42.199 "read": true, 00:13:42.199 "write": true, 00:13:42.199 "unmap": true, 00:13:42.199 "flush": true, 00:13:42.199 "reset": true, 00:13:42.199 "nvme_admin": false, 00:13:42.199 "nvme_io": false, 00:13:42.199 "nvme_io_md": false, 00:13:42.199 "write_zeroes": true, 00:13:42.199 "zcopy": true, 00:13:42.199 "get_zone_info": false, 00:13:42.199 "zone_management": false, 00:13:42.199 "zone_append": false, 00:13:42.199 "compare": false, 00:13:42.199 "compare_and_write": false, 00:13:42.199 "abort": true, 00:13:42.199 "seek_hole": false, 00:13:42.199 "seek_data": false, 00:13:42.199 "copy": true, 00:13:42.199 "nvme_iov_md": false 00:13:42.199 }, 00:13:42.199 "memory_domains": [ 00:13:42.199 { 00:13:42.199 "dma_device_id": "system", 00:13:42.199 "dma_device_type": 1 00:13:42.199 }, 00:13:42.199 { 00:13:42.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.199 "dma_device_type": 2 00:13:42.199 } 00:13:42.199 ], 00:13:42.199 "driver_specific": {} 00:13:42.199 } 00:13:42.199 ] 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.199 "name": "Existed_Raid", 00:13:42.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.199 "strip_size_kb": 64, 00:13:42.199 "state": "configuring", 00:13:42.199 "raid_level": "raid0", 00:13:42.199 "superblock": false, 00:13:42.199 
"num_base_bdevs": 3, 00:13:42.199 "num_base_bdevs_discovered": 2, 00:13:42.199 "num_base_bdevs_operational": 3, 00:13:42.199 "base_bdevs_list": [ 00:13:42.199 { 00:13:42.199 "name": "BaseBdev1", 00:13:42.199 "uuid": "d70fcb6b-559c-4da5-8aaf-60d2649573f9", 00:13:42.199 "is_configured": true, 00:13:42.199 "data_offset": 0, 00:13:42.199 "data_size": 65536 00:13:42.199 }, 00:13:42.199 { 00:13:42.199 "name": "BaseBdev2", 00:13:42.199 "uuid": "c0dca554-7c99-4ef1-a7cf-249bae9e5be3", 00:13:42.199 "is_configured": true, 00:13:42.199 "data_offset": 0, 00:13:42.199 "data_size": 65536 00:13:42.199 }, 00:13:42.199 { 00:13:42.199 "name": "BaseBdev3", 00:13:42.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.199 "is_configured": false, 00:13:42.199 "data_offset": 0, 00:13:42.199 "data_size": 0 00:13:42.199 } 00:13:42.199 ] 00:13:42.199 }' 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.199 09:06:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.796 [2024-11-06 09:06:19.074660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.796 [2024-11-06 09:06:19.074712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:42.796 [2024-11-06 09:06:19.074733] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:42.796 [2024-11-06 09:06:19.075105] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:42.796 [2024-11-06 09:06:19.075320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:13:42.796 [2024-11-06 09:06:19.075337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:42.796 [2024-11-06 09:06:19.075634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.796 BaseBdev3 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:42.796 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.797 [ 00:13:42.797 { 00:13:42.797 "name": "BaseBdev3", 00:13:42.797 "aliases": [ 00:13:42.797 
"ae19879d-2d64-4d27-a9b4-fd3283c6447c" 00:13:42.797 ], 00:13:42.797 "product_name": "Malloc disk", 00:13:42.797 "block_size": 512, 00:13:42.797 "num_blocks": 65536, 00:13:42.797 "uuid": "ae19879d-2d64-4d27-a9b4-fd3283c6447c", 00:13:42.797 "assigned_rate_limits": { 00:13:42.797 "rw_ios_per_sec": 0, 00:13:42.797 "rw_mbytes_per_sec": 0, 00:13:42.797 "r_mbytes_per_sec": 0, 00:13:42.797 "w_mbytes_per_sec": 0 00:13:42.797 }, 00:13:42.797 "claimed": true, 00:13:42.797 "claim_type": "exclusive_write", 00:13:42.797 "zoned": false, 00:13:42.797 "supported_io_types": { 00:13:42.797 "read": true, 00:13:42.797 "write": true, 00:13:42.797 "unmap": true, 00:13:42.797 "flush": true, 00:13:42.797 "reset": true, 00:13:42.797 "nvme_admin": false, 00:13:42.797 "nvme_io": false, 00:13:42.797 "nvme_io_md": false, 00:13:42.797 "write_zeroes": true, 00:13:42.797 "zcopy": true, 00:13:42.797 "get_zone_info": false, 00:13:42.797 "zone_management": false, 00:13:42.797 "zone_append": false, 00:13:42.797 "compare": false, 00:13:42.797 "compare_and_write": false, 00:13:42.797 "abort": true, 00:13:42.797 "seek_hole": false, 00:13:42.797 "seek_data": false, 00:13:42.797 "copy": true, 00:13:42.797 "nvme_iov_md": false 00:13:42.797 }, 00:13:42.797 "memory_domains": [ 00:13:42.797 { 00:13:42.797 "dma_device_id": "system", 00:13:42.797 "dma_device_type": 1 00:13:42.797 }, 00:13:42.797 { 00:13:42.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.797 "dma_device_type": 2 00:13:42.797 } 00:13:42.797 ], 00:13:42.797 "driver_specific": {} 00:13:42.797 } 00:13:42.797 ] 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:42.797 
09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.797 "name": "Existed_Raid", 00:13:42.797 "uuid": "e564a5a3-010e-4f79-97e1-59128d77b597", 00:13:42.797 "strip_size_kb": 64, 00:13:42.797 "state": "online", 00:13:42.797 
"raid_level": "raid0", 00:13:42.797 "superblock": false, 00:13:42.797 "num_base_bdevs": 3, 00:13:42.797 "num_base_bdevs_discovered": 3, 00:13:42.797 "num_base_bdevs_operational": 3, 00:13:42.797 "base_bdevs_list": [ 00:13:42.797 { 00:13:42.797 "name": "BaseBdev1", 00:13:42.797 "uuid": "d70fcb6b-559c-4da5-8aaf-60d2649573f9", 00:13:42.797 "is_configured": true, 00:13:42.797 "data_offset": 0, 00:13:42.797 "data_size": 65536 00:13:42.797 }, 00:13:42.797 { 00:13:42.797 "name": "BaseBdev2", 00:13:42.797 "uuid": "c0dca554-7c99-4ef1-a7cf-249bae9e5be3", 00:13:42.797 "is_configured": true, 00:13:42.797 "data_offset": 0, 00:13:42.797 "data_size": 65536 00:13:42.797 }, 00:13:42.797 { 00:13:42.797 "name": "BaseBdev3", 00:13:42.797 "uuid": "ae19879d-2d64-4d27-a9b4-fd3283c6447c", 00:13:42.797 "is_configured": true, 00:13:42.797 "data_offset": 0, 00:13:42.797 "data_size": 65536 00:13:42.797 } 00:13:42.797 ] 00:13:42.797 }' 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.797 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.392 [2024-11-06 09:06:19.667313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.392 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:43.392 "name": "Existed_Raid", 00:13:43.392 "aliases": [ 00:13:43.392 "e564a5a3-010e-4f79-97e1-59128d77b597" 00:13:43.392 ], 00:13:43.392 "product_name": "Raid Volume", 00:13:43.392 "block_size": 512, 00:13:43.392 "num_blocks": 196608, 00:13:43.392 "uuid": "e564a5a3-010e-4f79-97e1-59128d77b597", 00:13:43.392 "assigned_rate_limits": { 00:13:43.392 "rw_ios_per_sec": 0, 00:13:43.392 "rw_mbytes_per_sec": 0, 00:13:43.392 "r_mbytes_per_sec": 0, 00:13:43.392 "w_mbytes_per_sec": 0 00:13:43.392 }, 00:13:43.392 "claimed": false, 00:13:43.392 "zoned": false, 00:13:43.392 "supported_io_types": { 00:13:43.392 "read": true, 00:13:43.392 "write": true, 00:13:43.392 "unmap": true, 00:13:43.392 "flush": true, 00:13:43.392 "reset": true, 00:13:43.392 "nvme_admin": false, 00:13:43.392 "nvme_io": false, 00:13:43.392 "nvme_io_md": false, 00:13:43.392 "write_zeroes": true, 00:13:43.392 "zcopy": false, 00:13:43.392 "get_zone_info": false, 00:13:43.392 "zone_management": false, 00:13:43.392 "zone_append": false, 00:13:43.392 "compare": false, 00:13:43.392 "compare_and_write": false, 00:13:43.392 "abort": false, 00:13:43.392 "seek_hole": false, 00:13:43.392 "seek_data": false, 00:13:43.392 "copy": false, 00:13:43.392 "nvme_iov_md": false 00:13:43.392 }, 00:13:43.392 "memory_domains": [ 00:13:43.392 { 00:13:43.392 "dma_device_id": "system", 00:13:43.392 "dma_device_type": 1 00:13:43.392 }, 00:13:43.392 { 
00:13:43.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.392 "dma_device_type": 2 00:13:43.392 }, 00:13:43.392 { 00:13:43.392 "dma_device_id": "system", 00:13:43.392 "dma_device_type": 1 00:13:43.392 }, 00:13:43.392 { 00:13:43.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.392 "dma_device_type": 2 00:13:43.392 }, 00:13:43.392 { 00:13:43.392 "dma_device_id": "system", 00:13:43.392 "dma_device_type": 1 00:13:43.392 }, 00:13:43.392 { 00:13:43.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.392 "dma_device_type": 2 00:13:43.392 } 00:13:43.392 ], 00:13:43.392 "driver_specific": { 00:13:43.392 "raid": { 00:13:43.392 "uuid": "e564a5a3-010e-4f79-97e1-59128d77b597", 00:13:43.393 "strip_size_kb": 64, 00:13:43.393 "state": "online", 00:13:43.393 "raid_level": "raid0", 00:13:43.393 "superblock": false, 00:13:43.393 "num_base_bdevs": 3, 00:13:43.393 "num_base_bdevs_discovered": 3, 00:13:43.393 "num_base_bdevs_operational": 3, 00:13:43.393 "base_bdevs_list": [ 00:13:43.393 { 00:13:43.393 "name": "BaseBdev1", 00:13:43.393 "uuid": "d70fcb6b-559c-4da5-8aaf-60d2649573f9", 00:13:43.393 "is_configured": true, 00:13:43.393 "data_offset": 0, 00:13:43.393 "data_size": 65536 00:13:43.393 }, 00:13:43.393 { 00:13:43.393 "name": "BaseBdev2", 00:13:43.393 "uuid": "c0dca554-7c99-4ef1-a7cf-249bae9e5be3", 00:13:43.393 "is_configured": true, 00:13:43.393 "data_offset": 0, 00:13:43.393 "data_size": 65536 00:13:43.393 }, 00:13:43.393 { 00:13:43.393 "name": "BaseBdev3", 00:13:43.393 "uuid": "ae19879d-2d64-4d27-a9b4-fd3283c6447c", 00:13:43.393 "is_configured": true, 00:13:43.393 "data_offset": 0, 00:13:43.393 "data_size": 65536 00:13:43.393 } 00:13:43.393 ] 00:13:43.393 } 00:13:43.393 } 00:13:43.393 }' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='BaseBdev1 00:13:43.393 BaseBdev2 00:13:43.393 BaseBdev3' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.393 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.652 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.652 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.652 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.652 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.652 09:06:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:43.652 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.652 09:06:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.652 [2024-11-06 09:06:19.975074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:43.652 [2024-11-06 09:06:19.975109] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.652 [2024-11-06 09:06:19.975179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.652 09:06:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.652 "name": "Existed_Raid", 00:13:43.652 "uuid": "e564a5a3-010e-4f79-97e1-59128d77b597", 00:13:43.652 "strip_size_kb": 64, 00:13:43.652 "state": "offline", 00:13:43.652 "raid_level": "raid0", 00:13:43.652 "superblock": false, 00:13:43.652 "num_base_bdevs": 3, 00:13:43.652 "num_base_bdevs_discovered": 2, 00:13:43.652 "num_base_bdevs_operational": 2, 00:13:43.652 "base_bdevs_list": [ 00:13:43.652 { 00:13:43.652 "name": null, 00:13:43.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.652 "is_configured": false, 00:13:43.652 "data_offset": 0, 00:13:43.652 "data_size": 65536 00:13:43.652 }, 00:13:43.652 { 00:13:43.652 "name": "BaseBdev2", 00:13:43.652 "uuid": "c0dca554-7c99-4ef1-a7cf-249bae9e5be3", 00:13:43.652 "is_configured": true, 00:13:43.652 "data_offset": 0, 00:13:43.652 "data_size": 65536 00:13:43.652 }, 00:13:43.652 { 00:13:43.652 "name": "BaseBdev3", 00:13:43.652 "uuid": "ae19879d-2d64-4d27-a9b4-fd3283c6447c", 00:13:43.652 "is_configured": true, 00:13:43.652 "data_offset": 0, 00:13:43.652 "data_size": 65536 00:13:43.652 } 00:13:43.652 ] 00:13:43.652 }' 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.652 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.223 09:06:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.223 [2024-11-06 09:06:20.661463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.223 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.482 09:06:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 [2024-11-06 09:06:20.812377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:44.482 [2024-11-06 09:06:20.812453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:44.482 
09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.482 09:06:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.482 BaseBdev2 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.482 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.741 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.741 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:44.741 09:06:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.741 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.741 [ 00:13:44.741 { 00:13:44.741 "name": "BaseBdev2", 00:13:44.741 "aliases": [ 00:13:44.741 "c1849c44-5eba-46a7-a6b1-119c87c2650c" 00:13:44.741 ], 00:13:44.741 "product_name": "Malloc disk", 00:13:44.741 "block_size": 512, 00:13:44.741 "num_blocks": 65536, 00:13:44.741 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:44.741 "assigned_rate_limits": { 00:13:44.741 "rw_ios_per_sec": 0, 00:13:44.741 "rw_mbytes_per_sec": 0, 00:13:44.741 "r_mbytes_per_sec": 0, 00:13:44.741 "w_mbytes_per_sec": 0 00:13:44.741 }, 00:13:44.741 "claimed": false, 00:13:44.741 "zoned": false, 00:13:44.741 "supported_io_types": { 00:13:44.741 "read": true, 00:13:44.741 "write": true, 00:13:44.741 "unmap": true, 00:13:44.741 "flush": true, 00:13:44.741 "reset": true, 00:13:44.741 "nvme_admin": false, 00:13:44.741 "nvme_io": false, 00:13:44.741 "nvme_io_md": false, 00:13:44.741 "write_zeroes": true, 00:13:44.741 "zcopy": true, 00:13:44.741 "get_zone_info": false, 00:13:44.741 "zone_management": false, 00:13:44.741 "zone_append": false, 00:13:44.741 "compare": false, 00:13:44.741 "compare_and_write": false, 00:13:44.741 "abort": true, 00:13:44.741 "seek_hole": false, 00:13:44.741 "seek_data": false, 00:13:44.741 "copy": true, 00:13:44.741 "nvme_iov_md": false 00:13:44.741 }, 00:13:44.741 "memory_domains": [ 00:13:44.741 { 00:13:44.741 "dma_device_id": "system", 00:13:44.741 "dma_device_type": 1 00:13:44.741 }, 00:13:44.742 { 00:13:44.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.742 "dma_device_type": 2 00:13:44.742 } 00:13:44.742 ], 00:13:44.742 "driver_specific": {} 00:13:44.742 } 00:13:44.742 ] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:44.742 
09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.742 BaseBdev3 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.742 [ 00:13:44.742 { 00:13:44.742 "name": "BaseBdev3", 00:13:44.742 "aliases": [ 00:13:44.742 "f84420ea-07f3-4236-9922-cce7c6cf53f4" 00:13:44.742 ], 00:13:44.742 "product_name": "Malloc disk", 00:13:44.742 "block_size": 512, 00:13:44.742 "num_blocks": 65536, 00:13:44.742 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:44.742 "assigned_rate_limits": { 00:13:44.742 "rw_ios_per_sec": 0, 00:13:44.742 "rw_mbytes_per_sec": 0, 00:13:44.742 "r_mbytes_per_sec": 0, 00:13:44.742 "w_mbytes_per_sec": 0 00:13:44.742 }, 00:13:44.742 "claimed": false, 00:13:44.742 "zoned": false, 00:13:44.742 "supported_io_types": { 00:13:44.742 "read": true, 00:13:44.742 "write": true, 00:13:44.742 "unmap": true, 00:13:44.742 "flush": true, 00:13:44.742 "reset": true, 00:13:44.742 "nvme_admin": false, 00:13:44.742 "nvme_io": false, 00:13:44.742 "nvme_io_md": false, 00:13:44.742 "write_zeroes": true, 00:13:44.742 "zcopy": true, 00:13:44.742 "get_zone_info": false, 00:13:44.742 "zone_management": false, 00:13:44.742 "zone_append": false, 00:13:44.742 "compare": false, 00:13:44.742 "compare_and_write": false, 00:13:44.742 "abort": true, 00:13:44.742 "seek_hole": false, 00:13:44.742 "seek_data": false, 00:13:44.742 "copy": true, 00:13:44.742 "nvme_iov_md": false 00:13:44.742 }, 00:13:44.742 "memory_domains": [ 00:13:44.742 { 00:13:44.742 "dma_device_id": "system", 00:13:44.742 "dma_device_type": 1 00:13:44.742 }, 00:13:44.742 { 00:13:44.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.742 "dma_device_type": 2 00:13:44.742 } 00:13:44.742 ], 00:13:44.742 "driver_specific": {} 00:13:44.742 } 00:13:44.742 ] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:44.742 
09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.742 [2024-11-06 09:06:21.143351] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:44.742 [2024-11-06 09:06:21.143569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:44.742 [2024-11-06 09:06:21.143618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:44.742 [2024-11-06 09:06:21.145995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.742 "name": "Existed_Raid", 00:13:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.742 "strip_size_kb": 64, 00:13:44.742 "state": "configuring", 00:13:44.742 "raid_level": "raid0", 00:13:44.742 "superblock": false, 00:13:44.742 "num_base_bdevs": 3, 00:13:44.742 "num_base_bdevs_discovered": 2, 00:13:44.742 "num_base_bdevs_operational": 3, 00:13:44.742 "base_bdevs_list": [ 00:13:44.742 { 00:13:44.742 "name": "BaseBdev1", 00:13:44.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.742 "is_configured": false, 00:13:44.742 "data_offset": 0, 00:13:44.742 "data_size": 0 00:13:44.742 }, 00:13:44.742 { 00:13:44.742 "name": "BaseBdev2", 00:13:44.742 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:44.742 "is_configured": true, 00:13:44.742 "data_offset": 0, 00:13:44.742 "data_size": 65536 00:13:44.742 }, 00:13:44.742 { 00:13:44.742 "name": "BaseBdev3", 00:13:44.742 "uuid": 
"f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:44.742 "is_configured": true, 00:13:44.742 "data_offset": 0, 00:13:44.742 "data_size": 65536 00:13:44.742 } 00:13:44.742 ] 00:13:44.742 }' 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.742 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.309 [2024-11-06 09:06:21.671521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.309 "name": "Existed_Raid", 00:13:45.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.309 "strip_size_kb": 64, 00:13:45.309 "state": "configuring", 00:13:45.309 "raid_level": "raid0", 00:13:45.309 "superblock": false, 00:13:45.309 "num_base_bdevs": 3, 00:13:45.309 "num_base_bdevs_discovered": 1, 00:13:45.309 "num_base_bdevs_operational": 3, 00:13:45.309 "base_bdevs_list": [ 00:13:45.309 { 00:13:45.309 "name": "BaseBdev1", 00:13:45.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.309 "is_configured": false, 00:13:45.309 "data_offset": 0, 00:13:45.309 "data_size": 0 00:13:45.309 }, 00:13:45.309 { 00:13:45.309 "name": null, 00:13:45.309 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:45.309 "is_configured": false, 00:13:45.309 "data_offset": 0, 00:13:45.309 "data_size": 65536 00:13:45.309 }, 00:13:45.309 { 00:13:45.309 "name": "BaseBdev3", 00:13:45.309 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:45.309 "is_configured": true, 00:13:45.309 "data_offset": 0, 00:13:45.309 "data_size": 65536 00:13:45.309 } 00:13:45.309 ] 00:13:45.309 }' 00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:45.309 09:06:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.877 [2024-11-06 09:06:22.307552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.877 BaseBdev1 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.877 [ 00:13:45.877 { 00:13:45.877 "name": "BaseBdev1", 00:13:45.877 "aliases": [ 00:13:45.877 "e1b97f48-a40e-4e75-891f-b061cb7c7ff7" 00:13:45.877 ], 00:13:45.877 "product_name": "Malloc disk", 00:13:45.877 "block_size": 512, 00:13:45.877 "num_blocks": 65536, 00:13:45.877 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:45.877 "assigned_rate_limits": { 00:13:45.877 "rw_ios_per_sec": 0, 00:13:45.877 "rw_mbytes_per_sec": 0, 00:13:45.877 "r_mbytes_per_sec": 0, 00:13:45.877 "w_mbytes_per_sec": 0 00:13:45.877 }, 00:13:45.877 "claimed": true, 00:13:45.877 "claim_type": "exclusive_write", 00:13:45.877 "zoned": false, 00:13:45.877 "supported_io_types": { 00:13:45.877 "read": true, 00:13:45.877 "write": true, 00:13:45.877 "unmap": true, 00:13:45.877 "flush": true, 00:13:45.877 "reset": true, 00:13:45.877 "nvme_admin": false, 00:13:45.877 "nvme_io": false, 00:13:45.877 "nvme_io_md": false, 00:13:45.877 "write_zeroes": true, 00:13:45.877 "zcopy": true, 00:13:45.877 "get_zone_info": false, 00:13:45.877 "zone_management": false, 00:13:45.877 "zone_append": false, 00:13:45.877 "compare": false, 00:13:45.877 "compare_and_write": false, 
00:13:45.877 "abort": true, 00:13:45.877 "seek_hole": false, 00:13:45.877 "seek_data": false, 00:13:45.877 "copy": true, 00:13:45.877 "nvme_iov_md": false 00:13:45.877 }, 00:13:45.877 "memory_domains": [ 00:13:45.877 { 00:13:45.877 "dma_device_id": "system", 00:13:45.877 "dma_device_type": 1 00:13:45.877 }, 00:13:45.877 { 00:13:45.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.877 "dma_device_type": 2 00:13:45.877 } 00:13:45.877 ], 00:13:45.877 "driver_specific": {} 00:13:45.877 } 00:13:45.877 ] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.877 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.877 "name": "Existed_Raid", 00:13:45.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.877 "strip_size_kb": 64, 00:13:45.877 "state": "configuring", 00:13:45.878 "raid_level": "raid0", 00:13:45.878 "superblock": false, 00:13:45.878 "num_base_bdevs": 3, 00:13:45.878 "num_base_bdevs_discovered": 2, 00:13:45.878 "num_base_bdevs_operational": 3, 00:13:45.878 "base_bdevs_list": [ 00:13:45.878 { 00:13:45.878 "name": "BaseBdev1", 00:13:45.878 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:45.878 "is_configured": true, 00:13:45.878 "data_offset": 0, 00:13:45.878 "data_size": 65536 00:13:45.878 }, 00:13:45.878 { 00:13:45.878 "name": null, 00:13:45.878 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:45.878 "is_configured": false, 00:13:45.878 "data_offset": 0, 00:13:45.878 "data_size": 65536 00:13:45.878 }, 00:13:45.878 { 00:13:45.878 "name": "BaseBdev3", 00:13:45.878 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:45.878 "is_configured": true, 00:13:45.878 "data_offset": 0, 00:13:45.878 "data_size": 65536 00:13:45.878 } 00:13:45.878 ] 00:13:45.878 }' 00:13:45.878 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.878 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.445 09:06:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.445 [2024-11-06 09:06:22.931816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.445 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.703 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.703 "name": "Existed_Raid", 00:13:46.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.703 "strip_size_kb": 64, 00:13:46.703 "state": "configuring", 00:13:46.703 "raid_level": "raid0", 00:13:46.703 "superblock": false, 00:13:46.703 "num_base_bdevs": 3, 00:13:46.703 "num_base_bdevs_discovered": 1, 00:13:46.703 "num_base_bdevs_operational": 3, 00:13:46.703 "base_bdevs_list": [ 00:13:46.703 { 00:13:46.703 "name": "BaseBdev1", 00:13:46.703 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:46.703 "is_configured": true, 00:13:46.703 "data_offset": 0, 00:13:46.703 "data_size": 65536 00:13:46.703 }, 00:13:46.703 { 00:13:46.703 "name": null, 00:13:46.703 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:46.703 "is_configured": false, 00:13:46.703 "data_offset": 0, 00:13:46.703 "data_size": 65536 00:13:46.703 }, 00:13:46.703 { 00:13:46.703 "name": null, 00:13:46.703 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:46.703 "is_configured": false, 00:13:46.703 "data_offset": 0, 00:13:46.703 "data_size": 65536 00:13:46.703 } 
00:13:46.703 ] 00:13:46.703 }' 00:13:46.703 09:06:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.703 09:06:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.962 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:46.962 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.962 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.962 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.962 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.220 [2024-11-06 09:06:23.508113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 
00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.220 "name": "Existed_Raid", 00:13:47.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.220 "strip_size_kb": 64, 00:13:47.220 "state": "configuring", 00:13:47.220 "raid_level": "raid0", 00:13:47.220 "superblock": false, 00:13:47.220 "num_base_bdevs": 3, 00:13:47.220 "num_base_bdevs_discovered": 2, 00:13:47.220 "num_base_bdevs_operational": 3, 00:13:47.220 "base_bdevs_list": [ 00:13:47.220 { 00:13:47.220 "name": "BaseBdev1", 00:13:47.220 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:47.220 "is_configured": true, 00:13:47.220 "data_offset": 0, 00:13:47.220 "data_size": 65536 00:13:47.220 }, 00:13:47.220 { 00:13:47.220 "name": 
null, 00:13:47.220 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:47.220 "is_configured": false, 00:13:47.220 "data_offset": 0, 00:13:47.220 "data_size": 65536 00:13:47.220 }, 00:13:47.220 { 00:13:47.220 "name": "BaseBdev3", 00:13:47.220 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:47.220 "is_configured": true, 00:13:47.220 "data_offset": 0, 00:13:47.220 "data_size": 65536 00:13:47.220 } 00:13:47.220 ] 00:13:47.220 }' 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.220 09:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.788 [2024-11-06 09:06:24.096311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.788 "name": "Existed_Raid", 00:13:47.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.788 "strip_size_kb": 64, 00:13:47.788 "state": "configuring", 00:13:47.788 "raid_level": "raid0", 00:13:47.788 "superblock": false, 00:13:47.788 "num_base_bdevs": 3, 00:13:47.788 
"num_base_bdevs_discovered": 1, 00:13:47.788 "num_base_bdevs_operational": 3, 00:13:47.788 "base_bdevs_list": [ 00:13:47.788 { 00:13:47.788 "name": null, 00:13:47.788 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:47.788 "is_configured": false, 00:13:47.788 "data_offset": 0, 00:13:47.788 "data_size": 65536 00:13:47.788 }, 00:13:47.788 { 00:13:47.788 "name": null, 00:13:47.788 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:47.788 "is_configured": false, 00:13:47.788 "data_offset": 0, 00:13:47.788 "data_size": 65536 00:13:47.788 }, 00:13:47.788 { 00:13:47.788 "name": "BaseBdev3", 00:13:47.788 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:47.788 "is_configured": true, 00:13:47.788 "data_offset": 0, 00:13:47.788 "data_size": 65536 00:13:47.788 } 00:13:47.788 ] 00:13:47.788 }' 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.788 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:48.355 [2024-11-06 09:06:24.757876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.355 "name": "Existed_Raid", 00:13:48.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.355 "strip_size_kb": 64, 00:13:48.355 "state": "configuring", 00:13:48.355 "raid_level": "raid0", 00:13:48.355 "superblock": false, 00:13:48.355 "num_base_bdevs": 3, 00:13:48.355 "num_base_bdevs_discovered": 2, 00:13:48.355 "num_base_bdevs_operational": 3, 00:13:48.355 "base_bdevs_list": [ 00:13:48.355 { 00:13:48.355 "name": null, 00:13:48.355 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:48.355 "is_configured": false, 00:13:48.355 "data_offset": 0, 00:13:48.355 "data_size": 65536 00:13:48.355 }, 00:13:48.355 { 00:13:48.355 "name": "BaseBdev2", 00:13:48.355 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:48.355 "is_configured": true, 00:13:48.355 "data_offset": 0, 00:13:48.355 "data_size": 65536 00:13:48.355 }, 00:13:48.355 { 00:13:48.355 "name": "BaseBdev3", 00:13:48.355 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:48.355 "is_configured": true, 00:13:48.355 "data_offset": 0, 00:13:48.355 "data_size": 65536 00:13:48.355 } 00:13:48.355 ] 00:13:48.355 }' 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.355 09:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 
09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e1b97f48-a40e-4e75-891f-b061cb7c7ff7 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 [2024-11-06 09:06:25.430570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:48.922 [2024-11-06 09:06:25.430619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:48.922 [2024-11-06 09:06:25.430634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:48.922 [2024-11-06 09:06:25.430952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:48.922 [2024-11-06 09:06:25.431162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:48.922 [2024-11-06 09:06:25.431178] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:48.922 NewBaseBdev 00:13:48.922 [2024-11-06 09:06:25.431480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.922 09:06:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.922 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.922 [ 00:13:48.922 { 00:13:48.922 "name": "NewBaseBdev", 00:13:48.922 "aliases": [ 00:13:49.181 "e1b97f48-a40e-4e75-891f-b061cb7c7ff7" 00:13:49.181 ], 00:13:49.181 "product_name": "Malloc disk", 00:13:49.181 "block_size": 512, 00:13:49.181 "num_blocks": 65536, 00:13:49.181 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:49.181 "assigned_rate_limits": { 00:13:49.181 "rw_ios_per_sec": 0, 00:13:49.181 "rw_mbytes_per_sec": 0, 
00:13:49.181 "r_mbytes_per_sec": 0, 00:13:49.181 "w_mbytes_per_sec": 0 00:13:49.181 }, 00:13:49.181 "claimed": true, 00:13:49.181 "claim_type": "exclusive_write", 00:13:49.181 "zoned": false, 00:13:49.181 "supported_io_types": { 00:13:49.181 "read": true, 00:13:49.181 "write": true, 00:13:49.181 "unmap": true, 00:13:49.181 "flush": true, 00:13:49.181 "reset": true, 00:13:49.181 "nvme_admin": false, 00:13:49.181 "nvme_io": false, 00:13:49.181 "nvme_io_md": false, 00:13:49.181 "write_zeroes": true, 00:13:49.181 "zcopy": true, 00:13:49.181 "get_zone_info": false, 00:13:49.181 "zone_management": false, 00:13:49.181 "zone_append": false, 00:13:49.181 "compare": false, 00:13:49.181 "compare_and_write": false, 00:13:49.181 "abort": true, 00:13:49.181 "seek_hole": false, 00:13:49.181 "seek_data": false, 00:13:49.181 "copy": true, 00:13:49.181 "nvme_iov_md": false 00:13:49.181 }, 00:13:49.181 "memory_domains": [ 00:13:49.181 { 00:13:49.181 "dma_device_id": "system", 00:13:49.181 "dma_device_type": 1 00:13:49.181 }, 00:13:49.181 { 00:13:49.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.181 "dma_device_type": 2 00:13:49.181 } 00:13:49.181 ], 00:13:49.181 "driver_specific": {} 00:13:49.181 } 00:13:49.181 ] 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.181 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.182 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.182 "name": "Existed_Raid", 00:13:49.182 "uuid": "633c80d6-e3eb-41a5-8f5a-aaea85637da1", 00:13:49.182 "strip_size_kb": 64, 00:13:49.182 "state": "online", 00:13:49.182 "raid_level": "raid0", 00:13:49.182 "superblock": false, 00:13:49.182 "num_base_bdevs": 3, 00:13:49.182 "num_base_bdevs_discovered": 3, 00:13:49.182 "num_base_bdevs_operational": 3, 00:13:49.182 "base_bdevs_list": [ 00:13:49.182 { 00:13:49.182 "name": "NewBaseBdev", 00:13:49.182 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:49.182 "is_configured": true, 00:13:49.182 "data_offset": 0, 00:13:49.182 "data_size": 65536 00:13:49.182 }, 00:13:49.182 { 00:13:49.182 "name": "BaseBdev2", 00:13:49.182 "uuid": 
"c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:49.182 "is_configured": true, 00:13:49.182 "data_offset": 0, 00:13:49.182 "data_size": 65536 00:13:49.182 }, 00:13:49.182 { 00:13:49.182 "name": "BaseBdev3", 00:13:49.182 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:49.182 "is_configured": true, 00:13:49.182 "data_offset": 0, 00:13:49.182 "data_size": 65536 00:13:49.182 } 00:13:49.182 ] 00:13:49.182 }' 00:13:49.182 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.182 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.749 09:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.749 [2024-11-06 09:06:25.999255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.749 09:06:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:49.749 "name": "Existed_Raid", 00:13:49.749 "aliases": [ 00:13:49.749 "633c80d6-e3eb-41a5-8f5a-aaea85637da1" 00:13:49.749 ], 00:13:49.749 "product_name": "Raid Volume", 00:13:49.749 "block_size": 512, 00:13:49.749 "num_blocks": 196608, 00:13:49.749 "uuid": "633c80d6-e3eb-41a5-8f5a-aaea85637da1", 00:13:49.749 "assigned_rate_limits": { 00:13:49.749 "rw_ios_per_sec": 0, 00:13:49.749 "rw_mbytes_per_sec": 0, 00:13:49.749 "r_mbytes_per_sec": 0, 00:13:49.749 "w_mbytes_per_sec": 0 00:13:49.749 }, 00:13:49.749 "claimed": false, 00:13:49.749 "zoned": false, 00:13:49.749 "supported_io_types": { 00:13:49.749 "read": true, 00:13:49.749 "write": true, 00:13:49.749 "unmap": true, 00:13:49.749 "flush": true, 00:13:49.749 "reset": true, 00:13:49.749 "nvme_admin": false, 00:13:49.749 "nvme_io": false, 00:13:49.749 "nvme_io_md": false, 00:13:49.749 "write_zeroes": true, 00:13:49.749 "zcopy": false, 00:13:49.749 "get_zone_info": false, 00:13:49.749 "zone_management": false, 00:13:49.749 "zone_append": false, 00:13:49.749 "compare": false, 00:13:49.749 "compare_and_write": false, 00:13:49.749 "abort": false, 00:13:49.749 "seek_hole": false, 00:13:49.749 "seek_data": false, 00:13:49.749 "copy": false, 00:13:49.749 "nvme_iov_md": false 00:13:49.749 }, 00:13:49.749 "memory_domains": [ 00:13:49.749 { 00:13:49.749 "dma_device_id": "system", 00:13:49.749 "dma_device_type": 1 00:13:49.749 }, 00:13:49.749 { 00:13:49.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.749 "dma_device_type": 2 00:13:49.749 }, 00:13:49.749 { 00:13:49.749 "dma_device_id": "system", 00:13:49.749 "dma_device_type": 1 00:13:49.749 }, 00:13:49.749 { 00:13:49.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.749 "dma_device_type": 2 00:13:49.749 }, 00:13:49.749 { 00:13:49.749 "dma_device_id": "system", 00:13:49.749 "dma_device_type": 1 00:13:49.749 }, 00:13:49.749 { 00:13:49.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:49.749 "dma_device_type": 2 00:13:49.749 } 00:13:49.749 ], 00:13:49.749 "driver_specific": { 00:13:49.749 "raid": { 00:13:49.749 "uuid": "633c80d6-e3eb-41a5-8f5a-aaea85637da1", 00:13:49.749 "strip_size_kb": 64, 00:13:49.749 "state": "online", 00:13:49.749 "raid_level": "raid0", 00:13:49.749 "superblock": false, 00:13:49.749 "num_base_bdevs": 3, 00:13:49.749 "num_base_bdevs_discovered": 3, 00:13:49.749 "num_base_bdevs_operational": 3, 00:13:49.749 "base_bdevs_list": [ 00:13:49.749 { 00:13:49.749 "name": "NewBaseBdev", 00:13:49.749 "uuid": "e1b97f48-a40e-4e75-891f-b061cb7c7ff7", 00:13:49.749 "is_configured": true, 00:13:49.749 "data_offset": 0, 00:13:49.749 "data_size": 65536 00:13:49.749 }, 00:13:49.749 { 00:13:49.749 "name": "BaseBdev2", 00:13:49.749 "uuid": "c1849c44-5eba-46a7-a6b1-119c87c2650c", 00:13:49.749 "is_configured": true, 00:13:49.749 "data_offset": 0, 00:13:49.749 "data_size": 65536 00:13:49.749 }, 00:13:49.749 { 00:13:49.749 "name": "BaseBdev3", 00:13:49.749 "uuid": "f84420ea-07f3-4236-9922-cce7c6cf53f4", 00:13:49.749 "is_configured": true, 00:13:49.749 "data_offset": 0, 00:13:49.749 "data_size": 65536 00:13:49.749 } 00:13:49.749 ] 00:13:49.749 } 00:13:49.749 } 00:13:49.749 }' 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:49.749 BaseBdev2 00:13:49.749 BaseBdev3' 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.749 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:49.750 09:06:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.750 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:50.008 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.008 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:50.008 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:50.008 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:50.008 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.008 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.008 [2024-11-06 09:06:26.338965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.008 [2024-11-06 09:06:26.339129] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.008 [2024-11-06 09:06:26.339317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.008 [2024-11-06 09:06:26.339489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.008 [2024-11-06 09:06:26.339621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:50.008 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63726 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 63726 ']' 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 63726 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63726 00:13:50.009 killing process with pid 63726 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63726' 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 63726 00:13:50.009 [2024-11-06 09:06:26.383714] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.009 09:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 63726 00:13:50.267 [2024-11-06 09:06:26.651111] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.206 ************************************ 00:13:51.206 END TEST raid_state_function_test 00:13:51.206 ************************************ 00:13:51.206 09:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:51.206 00:13:51.206 real 0m12.017s 00:13:51.206 user 0m19.965s 00:13:51.206 sys 0m1.609s 00:13:51.206 09:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.206 09:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.206 09:06:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 
00:13:51.206 09:06:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:51.206 09:06:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.206 09:06:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.206 ************************************ 00:13:51.206 START TEST raid_state_function_test_sb 00:13:51.206 ************************************ 00:13:51.206 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:13:51.206 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.463 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:51.464 09:06:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:51.464 Process raid pid: 64364 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64364 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64364' 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 64364 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 64364 ']' 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:51.464 09:06:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.464 [2024-11-06 09:06:27.857573] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:13:51.464 [2024-11-06 09:06:27.858023] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.722 [2024-11-06 09:06:28.047742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.722 [2024-11-06 09:06:28.176269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.980 [2024-11-06 09:06:28.381618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.980 [2024-11-06 09:06:28.381842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.548 [2024-11-06 09:06:28.856189] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.548 [2024-11-06 09:06:28.856386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.548 [2024-11-06 09:06:28.856518] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.548 [2024-11-06 09:06:28.856584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.548 [2024-11-06 09:06:28.856802] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:13:52.548 [2024-11-06 09:06:28.856889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.548 "name": "Existed_Raid", 00:13:52.548 "uuid": "16a012a3-fad8-4acc-a017-ec4003286fc6", 00:13:52.548 "strip_size_kb": 64, 00:13:52.548 "state": "configuring", 00:13:52.548 "raid_level": "raid0", 00:13:52.548 "superblock": true, 00:13:52.548 "num_base_bdevs": 3, 00:13:52.548 "num_base_bdevs_discovered": 0, 00:13:52.548 "num_base_bdevs_operational": 3, 00:13:52.548 "base_bdevs_list": [ 00:13:52.548 { 00:13:52.548 "name": "BaseBdev1", 00:13:52.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.548 "is_configured": false, 00:13:52.548 "data_offset": 0, 00:13:52.548 "data_size": 0 00:13:52.548 }, 00:13:52.548 { 00:13:52.548 "name": "BaseBdev2", 00:13:52.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.548 "is_configured": false, 00:13:52.548 "data_offset": 0, 00:13:52.548 "data_size": 0 00:13:52.548 }, 00:13:52.548 { 00:13:52.548 "name": "BaseBdev3", 00:13:52.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.548 "is_configured": false, 00:13:52.548 "data_offset": 0, 00:13:52.548 "data_size": 0 00:13:52.548 } 00:13:52.548 ] 00:13:52.548 }' 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.548 09:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.118 [2024-11-06 09:06:29.380374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.118 [2024-11-06 09:06:29.380631] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.118 [2024-11-06 09:06:29.388331] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:53.118 [2024-11-06 09:06:29.388563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:53.118 [2024-11-06 09:06:29.388681] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.118 [2024-11-06 09:06:29.388806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.118 [2024-11-06 09:06:29.388948] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.118 [2024-11-06 09:06:29.389014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.118 [2024-11-06 09:06:29.434338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.118 BaseBdev1 
00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:53.118 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.119 [ 00:13:53.119 { 00:13:53.119 "name": "BaseBdev1", 00:13:53.119 "aliases": [ 00:13:53.119 "8afcf448-8f99-4280-86c6-141be938c24b" 00:13:53.119 ], 00:13:53.119 "product_name": "Malloc disk", 00:13:53.119 "block_size": 512, 00:13:53.119 "num_blocks": 65536, 00:13:53.119 "uuid": "8afcf448-8f99-4280-86c6-141be938c24b", 00:13:53.119 "assigned_rate_limits": { 00:13:53.119 
"rw_ios_per_sec": 0, 00:13:53.119 "rw_mbytes_per_sec": 0, 00:13:53.119 "r_mbytes_per_sec": 0, 00:13:53.119 "w_mbytes_per_sec": 0 00:13:53.119 }, 00:13:53.119 "claimed": true, 00:13:53.119 "claim_type": "exclusive_write", 00:13:53.119 "zoned": false, 00:13:53.119 "supported_io_types": { 00:13:53.119 "read": true, 00:13:53.119 "write": true, 00:13:53.119 "unmap": true, 00:13:53.119 "flush": true, 00:13:53.119 "reset": true, 00:13:53.119 "nvme_admin": false, 00:13:53.119 "nvme_io": false, 00:13:53.119 "nvme_io_md": false, 00:13:53.119 "write_zeroes": true, 00:13:53.119 "zcopy": true, 00:13:53.119 "get_zone_info": false, 00:13:53.119 "zone_management": false, 00:13:53.119 "zone_append": false, 00:13:53.119 "compare": false, 00:13:53.119 "compare_and_write": false, 00:13:53.119 "abort": true, 00:13:53.119 "seek_hole": false, 00:13:53.119 "seek_data": false, 00:13:53.119 "copy": true, 00:13:53.119 "nvme_iov_md": false 00:13:53.119 }, 00:13:53.119 "memory_domains": [ 00:13:53.119 { 00:13:53.119 "dma_device_id": "system", 00:13:53.119 "dma_device_type": 1 00:13:53.119 }, 00:13:53.119 { 00:13:53.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.119 "dma_device_type": 2 00:13:53.119 } 00:13:53.119 ], 00:13:53.119 "driver_specific": {} 00:13:53.119 } 00:13:53.119 ] 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.119 "name": "Existed_Raid", 00:13:53.119 "uuid": "97b0e884-6de8-4032-8bf2-2926f12dd7eb", 00:13:53.119 "strip_size_kb": 64, 00:13:53.119 "state": "configuring", 00:13:53.119 "raid_level": "raid0", 00:13:53.119 "superblock": true, 00:13:53.119 "num_base_bdevs": 3, 00:13:53.119 "num_base_bdevs_discovered": 1, 00:13:53.119 "num_base_bdevs_operational": 3, 00:13:53.119 "base_bdevs_list": [ 00:13:53.119 { 00:13:53.119 "name": "BaseBdev1", 00:13:53.119 "uuid": "8afcf448-8f99-4280-86c6-141be938c24b", 00:13:53.119 "is_configured": true, 00:13:53.119 "data_offset": 2048, 00:13:53.119 "data_size": 63488 
00:13:53.119 }, 00:13:53.119 { 00:13:53.119 "name": "BaseBdev2", 00:13:53.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.119 "is_configured": false, 00:13:53.119 "data_offset": 0, 00:13:53.119 "data_size": 0 00:13:53.119 }, 00:13:53.119 { 00:13:53.119 "name": "BaseBdev3", 00:13:53.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.119 "is_configured": false, 00:13:53.119 "data_offset": 0, 00:13:53.119 "data_size": 0 00:13:53.119 } 00:13:53.119 ] 00:13:53.119 }' 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.119 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.684 [2024-11-06 09:06:29.986544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:53.684 [2024-11-06 09:06:29.986604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.684 [2024-11-06 09:06:29.994583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:53.684 [2024-11-06 
09:06:29.997202] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:53.684 [2024-11-06 09:06:29.997395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:53.684 [2024-11-06 09:06:29.997514] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:53.684 [2024-11-06 09:06:29.997654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.684 09:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.684 "name": "Existed_Raid", 00:13:53.684 "uuid": "8f118a23-cb7e-4e68-86f0-321b6bc03ce3", 00:13:53.684 "strip_size_kb": 64, 00:13:53.684 "state": "configuring", 00:13:53.684 "raid_level": "raid0", 00:13:53.684 "superblock": true, 00:13:53.684 "num_base_bdevs": 3, 00:13:53.684 "num_base_bdevs_discovered": 1, 00:13:53.684 "num_base_bdevs_operational": 3, 00:13:53.684 "base_bdevs_list": [ 00:13:53.684 { 00:13:53.684 "name": "BaseBdev1", 00:13:53.684 "uuid": "8afcf448-8f99-4280-86c6-141be938c24b", 00:13:53.684 "is_configured": true, 00:13:53.684 "data_offset": 2048, 00:13:53.684 "data_size": 63488 00:13:53.684 }, 00:13:53.684 { 00:13:53.684 "name": "BaseBdev2", 00:13:53.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.684 "is_configured": false, 00:13:53.684 "data_offset": 0, 00:13:53.684 "data_size": 0 00:13:53.684 }, 00:13:53.684 { 00:13:53.684 "name": "BaseBdev3", 00:13:53.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.684 "is_configured": false, 00:13:53.684 "data_offset": 0, 00:13:53.684 "data_size": 0 00:13:53.684 } 00:13:53.684 ] 00:13:53.684 }' 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.684 09:06:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.251 [2024-11-06 09:06:30.580600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.251 BaseBdev2 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.251 [ 00:13:54.251 { 00:13:54.251 "name": "BaseBdev2", 00:13:54.251 "aliases": [ 00:13:54.251 "08cf0073-4aa6-495b-b548-3075b1c3f566" 00:13:54.251 ], 00:13:54.251 "product_name": "Malloc disk", 00:13:54.251 "block_size": 512, 00:13:54.251 "num_blocks": 65536, 00:13:54.251 "uuid": "08cf0073-4aa6-495b-b548-3075b1c3f566", 00:13:54.251 "assigned_rate_limits": { 00:13:54.251 "rw_ios_per_sec": 0, 00:13:54.251 "rw_mbytes_per_sec": 0, 00:13:54.251 "r_mbytes_per_sec": 0, 00:13:54.251 "w_mbytes_per_sec": 0 00:13:54.251 }, 00:13:54.251 "claimed": true, 00:13:54.251 "claim_type": "exclusive_write", 00:13:54.251 "zoned": false, 00:13:54.251 "supported_io_types": { 00:13:54.251 "read": true, 00:13:54.251 "write": true, 00:13:54.251 "unmap": true, 00:13:54.251 "flush": true, 00:13:54.251 "reset": true, 00:13:54.251 "nvme_admin": false, 00:13:54.251 "nvme_io": false, 00:13:54.251 "nvme_io_md": false, 00:13:54.251 "write_zeroes": true, 00:13:54.251 "zcopy": true, 00:13:54.251 "get_zone_info": false, 00:13:54.251 "zone_management": false, 00:13:54.251 "zone_append": false, 00:13:54.251 "compare": false, 00:13:54.251 "compare_and_write": false, 00:13:54.251 "abort": true, 00:13:54.251 "seek_hole": false, 00:13:54.251 "seek_data": false, 00:13:54.251 "copy": true, 00:13:54.251 "nvme_iov_md": false 00:13:54.251 }, 00:13:54.251 "memory_domains": [ 00:13:54.251 { 00:13:54.251 "dma_device_id": "system", 00:13:54.251 "dma_device_type": 1 00:13:54.251 }, 00:13:54.251 { 00:13:54.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.251 "dma_device_type": 2 00:13:54.251 } 00:13:54.251 ], 00:13:54.251 "driver_specific": {} 00:13:54.251 } 00:13:54.251 ] 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.251 "name": "Existed_Raid", 00:13:54.251 "uuid": "8f118a23-cb7e-4e68-86f0-321b6bc03ce3", 00:13:54.251 "strip_size_kb": 64, 00:13:54.251 "state": "configuring", 00:13:54.251 "raid_level": "raid0", 00:13:54.251 "superblock": true, 00:13:54.251 "num_base_bdevs": 3, 00:13:54.251 "num_base_bdevs_discovered": 2, 00:13:54.251 "num_base_bdevs_operational": 3, 00:13:54.251 "base_bdevs_list": [ 00:13:54.251 { 00:13:54.251 "name": "BaseBdev1", 00:13:54.251 "uuid": "8afcf448-8f99-4280-86c6-141be938c24b", 00:13:54.251 "is_configured": true, 00:13:54.251 "data_offset": 2048, 00:13:54.251 "data_size": 63488 00:13:54.251 }, 00:13:54.251 { 00:13:54.251 "name": "BaseBdev2", 00:13:54.251 "uuid": "08cf0073-4aa6-495b-b548-3075b1c3f566", 00:13:54.251 "is_configured": true, 00:13:54.251 "data_offset": 2048, 00:13:54.251 "data_size": 63488 00:13:54.251 }, 00:13:54.251 { 00:13:54.251 "name": "BaseBdev3", 00:13:54.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.251 "is_configured": false, 00:13:54.251 "data_offset": 0, 00:13:54.251 "data_size": 0 00:13:54.251 } 00:13:54.251 ] 00:13:54.251 }' 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.251 09:06:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.818 [2024-11-06 09:06:31.184258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.818 [2024-11-06 09:06:31.184678] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:54.818 [2024-11-06 09:06:31.184710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:54.818 BaseBdev3 00:13:54.818 [2024-11-06 09:06:31.185059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:54.818 [2024-11-06 09:06:31.185272] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:54.818 [2024-11-06 09:06:31.185288] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:54.818 [2024-11-06 09:06:31.185466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.818 [ 00:13:54.818 { 00:13:54.818 "name": "BaseBdev3", 00:13:54.818 "aliases": [ 00:13:54.818 "615e1aa2-0c1a-42ee-83eb-8900a0689a2d" 00:13:54.818 ], 00:13:54.818 "product_name": "Malloc disk", 00:13:54.818 "block_size": 512, 00:13:54.818 "num_blocks": 65536, 00:13:54.818 "uuid": "615e1aa2-0c1a-42ee-83eb-8900a0689a2d", 00:13:54.818 "assigned_rate_limits": { 00:13:54.818 "rw_ios_per_sec": 0, 00:13:54.818 "rw_mbytes_per_sec": 0, 00:13:54.818 "r_mbytes_per_sec": 0, 00:13:54.818 "w_mbytes_per_sec": 0 00:13:54.818 }, 00:13:54.818 "claimed": true, 00:13:54.818 "claim_type": "exclusive_write", 00:13:54.818 "zoned": false, 00:13:54.818 "supported_io_types": { 00:13:54.818 "read": true, 00:13:54.818 "write": true, 00:13:54.818 "unmap": true, 00:13:54.818 "flush": true, 00:13:54.818 "reset": true, 00:13:54.818 "nvme_admin": false, 00:13:54.818 "nvme_io": false, 00:13:54.818 "nvme_io_md": false, 00:13:54.818 "write_zeroes": true, 00:13:54.818 "zcopy": true, 00:13:54.818 "get_zone_info": false, 00:13:54.818 "zone_management": false, 00:13:54.818 "zone_append": false, 00:13:54.818 "compare": false, 00:13:54.818 "compare_and_write": false, 00:13:54.818 "abort": true, 00:13:54.818 "seek_hole": false, 00:13:54.818 "seek_data": false, 00:13:54.818 "copy": true, 00:13:54.818 "nvme_iov_md": false 00:13:54.818 }, 00:13:54.818 "memory_domains": [ 00:13:54.818 { 00:13:54.818 "dma_device_id": "system", 00:13:54.818 "dma_device_type": 1 00:13:54.818 }, 00:13:54.818 { 00:13:54.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.818 "dma_device_type": 2 00:13:54.818 } 00:13:54.818 ], 00:13:54.818 "driver_specific": 
{} 00:13:54.818 } 00:13:54.818 ] 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.818 "name": "Existed_Raid", 00:13:54.818 "uuid": "8f118a23-cb7e-4e68-86f0-321b6bc03ce3", 00:13:54.818 "strip_size_kb": 64, 00:13:54.818 "state": "online", 00:13:54.818 "raid_level": "raid0", 00:13:54.818 "superblock": true, 00:13:54.818 "num_base_bdevs": 3, 00:13:54.818 "num_base_bdevs_discovered": 3, 00:13:54.818 "num_base_bdevs_operational": 3, 00:13:54.818 "base_bdevs_list": [ 00:13:54.818 { 00:13:54.818 "name": "BaseBdev1", 00:13:54.818 "uuid": "8afcf448-8f99-4280-86c6-141be938c24b", 00:13:54.818 "is_configured": true, 00:13:54.818 "data_offset": 2048, 00:13:54.818 "data_size": 63488 00:13:54.818 }, 00:13:54.818 { 00:13:54.818 "name": "BaseBdev2", 00:13:54.818 "uuid": "08cf0073-4aa6-495b-b548-3075b1c3f566", 00:13:54.818 "is_configured": true, 00:13:54.818 "data_offset": 2048, 00:13:54.818 "data_size": 63488 00:13:54.818 }, 00:13:54.818 { 00:13:54.818 "name": "BaseBdev3", 00:13:54.818 "uuid": "615e1aa2-0c1a-42ee-83eb-8900a0689a2d", 00:13:54.818 "is_configured": true, 00:13:54.818 "data_offset": 2048, 00:13:54.818 "data_size": 63488 00:13:54.818 } 00:13:54.818 ] 00:13:54.818 }' 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.818 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.384 [2024-11-06 09:06:31.728866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.384 "name": "Existed_Raid", 00:13:55.384 "aliases": [ 00:13:55.384 "8f118a23-cb7e-4e68-86f0-321b6bc03ce3" 00:13:55.384 ], 00:13:55.384 "product_name": "Raid Volume", 00:13:55.384 "block_size": 512, 00:13:55.384 "num_blocks": 190464, 00:13:55.384 "uuid": "8f118a23-cb7e-4e68-86f0-321b6bc03ce3", 00:13:55.384 "assigned_rate_limits": { 00:13:55.384 "rw_ios_per_sec": 0, 00:13:55.384 "rw_mbytes_per_sec": 0, 00:13:55.384 "r_mbytes_per_sec": 0, 00:13:55.384 "w_mbytes_per_sec": 0 00:13:55.384 }, 00:13:55.384 "claimed": false, 00:13:55.384 "zoned": false, 00:13:55.384 "supported_io_types": { 00:13:55.384 "read": true, 00:13:55.384 "write": true, 00:13:55.384 "unmap": true, 00:13:55.384 "flush": true, 00:13:55.384 "reset": true, 00:13:55.384 "nvme_admin": false, 00:13:55.384 "nvme_io": false, 00:13:55.384 "nvme_io_md": false, 00:13:55.384 
"write_zeroes": true, 00:13:55.384 "zcopy": false, 00:13:55.384 "get_zone_info": false, 00:13:55.384 "zone_management": false, 00:13:55.384 "zone_append": false, 00:13:55.384 "compare": false, 00:13:55.384 "compare_and_write": false, 00:13:55.384 "abort": false, 00:13:55.384 "seek_hole": false, 00:13:55.384 "seek_data": false, 00:13:55.384 "copy": false, 00:13:55.384 "nvme_iov_md": false 00:13:55.384 }, 00:13:55.384 "memory_domains": [ 00:13:55.384 { 00:13:55.384 "dma_device_id": "system", 00:13:55.384 "dma_device_type": 1 00:13:55.384 }, 00:13:55.384 { 00:13:55.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.384 "dma_device_type": 2 00:13:55.384 }, 00:13:55.384 { 00:13:55.384 "dma_device_id": "system", 00:13:55.384 "dma_device_type": 1 00:13:55.384 }, 00:13:55.384 { 00:13:55.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.384 "dma_device_type": 2 00:13:55.384 }, 00:13:55.384 { 00:13:55.384 "dma_device_id": "system", 00:13:55.384 "dma_device_type": 1 00:13:55.384 }, 00:13:55.384 { 00:13:55.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.384 "dma_device_type": 2 00:13:55.384 } 00:13:55.384 ], 00:13:55.384 "driver_specific": { 00:13:55.384 "raid": { 00:13:55.384 "uuid": "8f118a23-cb7e-4e68-86f0-321b6bc03ce3", 00:13:55.384 "strip_size_kb": 64, 00:13:55.384 "state": "online", 00:13:55.384 "raid_level": "raid0", 00:13:55.384 "superblock": true, 00:13:55.384 "num_base_bdevs": 3, 00:13:55.384 "num_base_bdevs_discovered": 3, 00:13:55.384 "num_base_bdevs_operational": 3, 00:13:55.384 "base_bdevs_list": [ 00:13:55.384 { 00:13:55.384 "name": "BaseBdev1", 00:13:55.384 "uuid": "8afcf448-8f99-4280-86c6-141be938c24b", 00:13:55.384 "is_configured": true, 00:13:55.384 "data_offset": 2048, 00:13:55.384 "data_size": 63488 00:13:55.384 }, 00:13:55.384 { 00:13:55.384 "name": "BaseBdev2", 00:13:55.384 "uuid": "08cf0073-4aa6-495b-b548-3075b1c3f566", 00:13:55.384 "is_configured": true, 00:13:55.384 "data_offset": 2048, 00:13:55.384 "data_size": 63488 00:13:55.384 }, 
00:13:55.384 { 00:13:55.384 "name": "BaseBdev3", 00:13:55.384 "uuid": "615e1aa2-0c1a-42ee-83eb-8900a0689a2d", 00:13:55.384 "is_configured": true, 00:13:55.384 "data_offset": 2048, 00:13:55.384 "data_size": 63488 00:13:55.384 } 00:13:55.384 ] 00:13:55.384 } 00:13:55.384 } 00:13:55.384 }' 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:55.384 BaseBdev2 00:13:55.384 BaseBdev3' 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:55.384 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.385 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.385 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.385 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.644 
09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:55.644 09:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.644 [2024-11-06 09:06:32.032594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.644 [2024-11-06 09:06:32.032768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.644 [2024-11-06 09:06:32.032953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.644 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.901 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.901 "name": "Existed_Raid", 00:13:55.901 "uuid": "8f118a23-cb7e-4e68-86f0-321b6bc03ce3", 00:13:55.901 "strip_size_kb": 64, 00:13:55.901 "state": "offline", 00:13:55.901 "raid_level": "raid0", 00:13:55.901 "superblock": true, 00:13:55.901 "num_base_bdevs": 3, 00:13:55.902 "num_base_bdevs_discovered": 2, 00:13:55.902 "num_base_bdevs_operational": 2, 00:13:55.902 "base_bdevs_list": [ 00:13:55.902 { 00:13:55.902 "name": null, 00:13:55.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.902 "is_configured": false, 00:13:55.902 "data_offset": 0, 00:13:55.902 "data_size": 63488 00:13:55.902 }, 00:13:55.902 { 00:13:55.902 "name": "BaseBdev2", 00:13:55.902 "uuid": "08cf0073-4aa6-495b-b548-3075b1c3f566", 00:13:55.902 "is_configured": true, 00:13:55.902 "data_offset": 2048, 00:13:55.902 "data_size": 63488 00:13:55.902 }, 00:13:55.902 { 00:13:55.902 "name": "BaseBdev3", 00:13:55.902 "uuid": "615e1aa2-0c1a-42ee-83eb-8900a0689a2d", 
00:13:55.902 "is_configured": true, 00:13:55.902 "data_offset": 2048, 00:13:55.902 "data_size": 63488 00:13:55.902 } 00:13:55.902 ] 00:13:55.902 }' 00:13:55.902 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.902 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.159 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:56.160 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.160 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.160 [2024-11-06 09:06:32.662679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.417 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.418 [2024-11-06 09:06:32.813713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.418 [2024-11-06 09:06:32.813974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.418 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.682 BaseBdev2 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:56.682 09:06:32 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.682 [ 00:13:56.682 { 00:13:56.682 "name": "BaseBdev2", 00:13:56.682 "aliases": [ 00:13:56.682 "1b82663b-c544-4645-af7a-426853a2df53" 00:13:56.682 ], 00:13:56.682 "product_name": "Malloc disk", 00:13:56.682 "block_size": 512, 00:13:56.682 "num_blocks": 65536, 00:13:56.682 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:13:56.682 "assigned_rate_limits": { 00:13:56.682 "rw_ios_per_sec": 0, 00:13:56.682 "rw_mbytes_per_sec": 0, 00:13:56.682 "r_mbytes_per_sec": 0, 00:13:56.682 "w_mbytes_per_sec": 0 00:13:56.682 }, 00:13:56.682 "claimed": false, 00:13:56.682 "zoned": false, 00:13:56.682 "supported_io_types": { 00:13:56.682 "read": true, 00:13:56.682 "write": true, 00:13:56.682 "unmap": true, 00:13:56.682 "flush": true, 00:13:56.682 "reset": true, 00:13:56.682 "nvme_admin": false, 00:13:56.682 "nvme_io": false, 00:13:56.682 "nvme_io_md": false, 00:13:56.682 "write_zeroes": true, 00:13:56.682 "zcopy": true, 00:13:56.682 "get_zone_info": false, 00:13:56.682 "zone_management": false, 00:13:56.682 
"zone_append": false, 00:13:56.682 "compare": false, 00:13:56.682 "compare_and_write": false, 00:13:56.682 "abort": true, 00:13:56.682 "seek_hole": false, 00:13:56.682 "seek_data": false, 00:13:56.682 "copy": true, 00:13:56.682 "nvme_iov_md": false 00:13:56.682 }, 00:13:56.682 "memory_domains": [ 00:13:56.682 { 00:13:56.682 "dma_device_id": "system", 00:13:56.682 "dma_device_type": 1 00:13:56.682 }, 00:13:56.682 { 00:13:56.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.682 "dma_device_type": 2 00:13:56.682 } 00:13:56.682 ], 00:13:56.682 "driver_specific": {} 00:13:56.682 } 00:13:56.682 ] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.682 BaseBdev3 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:56.682 
09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.682 [ 00:13:56.682 { 00:13:56.682 "name": "BaseBdev3", 00:13:56.682 "aliases": [ 00:13:56.682 "25f18986-32df-47c3-a282-5d1c4387ed4f" 00:13:56.682 ], 00:13:56.682 "product_name": "Malloc disk", 00:13:56.682 "block_size": 512, 00:13:56.682 "num_blocks": 65536, 00:13:56.682 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:13:56.682 "assigned_rate_limits": { 00:13:56.682 "rw_ios_per_sec": 0, 00:13:56.682 "rw_mbytes_per_sec": 0, 00:13:56.682 "r_mbytes_per_sec": 0, 00:13:56.682 "w_mbytes_per_sec": 0 00:13:56.682 }, 00:13:56.682 "claimed": false, 00:13:56.682 "zoned": false, 00:13:56.682 "supported_io_types": { 00:13:56.682 "read": true, 00:13:56.682 "write": true, 00:13:56.682 "unmap": true, 00:13:56.682 "flush": true, 00:13:56.682 "reset": true, 00:13:56.682 "nvme_admin": false, 00:13:56.682 "nvme_io": false, 00:13:56.682 "nvme_io_md": false, 00:13:56.682 "write_zeroes": true, 00:13:56.682 "zcopy": true, 00:13:56.682 "get_zone_info": false, 
00:13:56.682 "zone_management": false, 00:13:56.682 "zone_append": false, 00:13:56.682 "compare": false, 00:13:56.682 "compare_and_write": false, 00:13:56.682 "abort": true, 00:13:56.682 "seek_hole": false, 00:13:56.682 "seek_data": false, 00:13:56.682 "copy": true, 00:13:56.682 "nvme_iov_md": false 00:13:56.682 }, 00:13:56.682 "memory_domains": [ 00:13:56.682 { 00:13:56.682 "dma_device_id": "system", 00:13:56.682 "dma_device_type": 1 00:13:56.682 }, 00:13:56.682 { 00:13:56.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.682 "dma_device_type": 2 00:13:56.682 } 00:13:56.682 ], 00:13:56.682 "driver_specific": {} 00:13:56.682 } 00:13:56.682 ] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.682 [2024-11-06 09:06:33.108103] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.682 [2024-11-06 09:06:33.108285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.682 [2024-11-06 09:06:33.108423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.682 [2024-11-06 09:06:33.110884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.682 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:56.683 "name": "Existed_Raid", 00:13:56.683 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:13:56.683 "strip_size_kb": 64, 00:13:56.683 "state": "configuring", 00:13:56.683 "raid_level": "raid0", 00:13:56.683 "superblock": true, 00:13:56.683 "num_base_bdevs": 3, 00:13:56.683 "num_base_bdevs_discovered": 2, 00:13:56.683 "num_base_bdevs_operational": 3, 00:13:56.683 "base_bdevs_list": [ 00:13:56.683 { 00:13:56.683 "name": "BaseBdev1", 00:13:56.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.683 "is_configured": false, 00:13:56.683 "data_offset": 0, 00:13:56.683 "data_size": 0 00:13:56.683 }, 00:13:56.683 { 00:13:56.683 "name": "BaseBdev2", 00:13:56.683 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:13:56.683 "is_configured": true, 00:13:56.683 "data_offset": 2048, 00:13:56.683 "data_size": 63488 00:13:56.683 }, 00:13:56.683 { 00:13:56.683 "name": "BaseBdev3", 00:13:56.683 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:13:56.683 "is_configured": true, 00:13:56.683 "data_offset": 2048, 00:13:56.683 "data_size": 63488 00:13:56.683 } 00:13:56.683 ] 00:13:56.683 }' 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.683 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.247 [2024-11-06 09:06:33.636206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.247 "name": "Existed_Raid", 00:13:57.247 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:13:57.247 "strip_size_kb": 64, 00:13:57.247 "state": "configuring", 00:13:57.247 "raid_level": "raid0", 
00:13:57.247 "superblock": true, 00:13:57.247 "num_base_bdevs": 3, 00:13:57.247 "num_base_bdevs_discovered": 1, 00:13:57.247 "num_base_bdevs_operational": 3, 00:13:57.247 "base_bdevs_list": [ 00:13:57.247 { 00:13:57.247 "name": "BaseBdev1", 00:13:57.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.247 "is_configured": false, 00:13:57.247 "data_offset": 0, 00:13:57.247 "data_size": 0 00:13:57.247 }, 00:13:57.247 { 00:13:57.247 "name": null, 00:13:57.247 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:13:57.247 "is_configured": false, 00:13:57.247 "data_offset": 0, 00:13:57.247 "data_size": 63488 00:13:57.247 }, 00:13:57.247 { 00:13:57.247 "name": "BaseBdev3", 00:13:57.247 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:13:57.247 "is_configured": true, 00:13:57.247 "data_offset": 2048, 00:13:57.247 "data_size": 63488 00:13:57.247 } 00:13:57.247 ] 00:13:57.247 }' 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.247 09:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.813 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.813 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:57.813 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.813 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.813 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.813 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.814 [2024-11-06 09:06:34.280590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.814 BaseBdev1 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.814 [ 00:13:57.814 { 00:13:57.814 "name": "BaseBdev1", 00:13:57.814 
"aliases": [ 00:13:57.814 "079d8fe9-1456-47cc-bc46-eb57fa79a671" 00:13:57.814 ], 00:13:57.814 "product_name": "Malloc disk", 00:13:57.814 "block_size": 512, 00:13:57.814 "num_blocks": 65536, 00:13:57.814 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:13:57.814 "assigned_rate_limits": { 00:13:57.814 "rw_ios_per_sec": 0, 00:13:57.814 "rw_mbytes_per_sec": 0, 00:13:57.814 "r_mbytes_per_sec": 0, 00:13:57.814 "w_mbytes_per_sec": 0 00:13:57.814 }, 00:13:57.814 "claimed": true, 00:13:57.814 "claim_type": "exclusive_write", 00:13:57.814 "zoned": false, 00:13:57.814 "supported_io_types": { 00:13:57.814 "read": true, 00:13:57.814 "write": true, 00:13:57.814 "unmap": true, 00:13:57.814 "flush": true, 00:13:57.814 "reset": true, 00:13:57.814 "nvme_admin": false, 00:13:57.814 "nvme_io": false, 00:13:57.814 "nvme_io_md": false, 00:13:57.814 "write_zeroes": true, 00:13:57.814 "zcopy": true, 00:13:57.814 "get_zone_info": false, 00:13:57.814 "zone_management": false, 00:13:57.814 "zone_append": false, 00:13:57.814 "compare": false, 00:13:57.814 "compare_and_write": false, 00:13:57.814 "abort": true, 00:13:57.814 "seek_hole": false, 00:13:57.814 "seek_data": false, 00:13:57.814 "copy": true, 00:13:57.814 "nvme_iov_md": false 00:13:57.814 }, 00:13:57.814 "memory_domains": [ 00:13:57.814 { 00:13:57.814 "dma_device_id": "system", 00:13:57.814 "dma_device_type": 1 00:13:57.814 }, 00:13:57.814 { 00:13:57.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.814 "dma_device_type": 2 00:13:57.814 } 00:13:57.814 ], 00:13:57.814 "driver_specific": {} 00:13:57.814 } 00:13:57.814 ] 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:57.814 09:06:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.814 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.072 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.072 "name": "Existed_Raid", 00:13:58.072 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:13:58.072 "strip_size_kb": 64, 00:13:58.072 "state": "configuring", 00:13:58.072 "raid_level": "raid0", 00:13:58.072 "superblock": true, 00:13:58.072 "num_base_bdevs": 3, 00:13:58.072 
"num_base_bdevs_discovered": 2, 00:13:58.072 "num_base_bdevs_operational": 3, 00:13:58.073 "base_bdevs_list": [ 00:13:58.073 { 00:13:58.073 "name": "BaseBdev1", 00:13:58.073 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:13:58.073 "is_configured": true, 00:13:58.073 "data_offset": 2048, 00:13:58.073 "data_size": 63488 00:13:58.073 }, 00:13:58.073 { 00:13:58.073 "name": null, 00:13:58.073 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:13:58.073 "is_configured": false, 00:13:58.073 "data_offset": 0, 00:13:58.073 "data_size": 63488 00:13:58.073 }, 00:13:58.073 { 00:13:58.073 "name": "BaseBdev3", 00:13:58.073 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:13:58.073 "is_configured": true, 00:13:58.073 "data_offset": 2048, 00:13:58.073 "data_size": 63488 00:13:58.073 } 00:13:58.073 ] 00:13:58.073 }' 00:13:58.073 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.073 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.330 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.330 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:58.330 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.330 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.330 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.588 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:58.588 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:58.588 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.588 09:06:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.588 [2024-11-06 09:06:34.872958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:58.588 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.588 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.589 09:06:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.589 "name": "Existed_Raid", 00:13:58.589 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:13:58.589 "strip_size_kb": 64, 00:13:58.589 "state": "configuring", 00:13:58.589 "raid_level": "raid0", 00:13:58.589 "superblock": true, 00:13:58.589 "num_base_bdevs": 3, 00:13:58.589 "num_base_bdevs_discovered": 1, 00:13:58.589 "num_base_bdevs_operational": 3, 00:13:58.589 "base_bdevs_list": [ 00:13:58.589 { 00:13:58.589 "name": "BaseBdev1", 00:13:58.589 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:13:58.589 "is_configured": true, 00:13:58.589 "data_offset": 2048, 00:13:58.589 "data_size": 63488 00:13:58.589 }, 00:13:58.589 { 00:13:58.589 "name": null, 00:13:58.589 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:13:58.589 "is_configured": false, 00:13:58.589 "data_offset": 0, 00:13:58.589 "data_size": 63488 00:13:58.589 }, 00:13:58.589 { 00:13:58.589 "name": null, 00:13:58.589 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:13:58.589 "is_configured": false, 00:13:58.589 "data_offset": 0, 00:13:58.589 "data_size": 63488 00:13:58.589 } 00:13:58.589 ] 00:13:58.589 }' 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.589 09:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.155 09:06:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.155 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.155 [2024-11-06 09:06:35.473169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.156 "name": "Existed_Raid", 00:13:59.156 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:13:59.156 "strip_size_kb": 64, 00:13:59.156 "state": "configuring", 00:13:59.156 "raid_level": "raid0", 00:13:59.156 "superblock": true, 00:13:59.156 "num_base_bdevs": 3, 00:13:59.156 "num_base_bdevs_discovered": 2, 00:13:59.156 "num_base_bdevs_operational": 3, 00:13:59.156 "base_bdevs_list": [ 00:13:59.156 { 00:13:59.156 "name": "BaseBdev1", 00:13:59.156 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:13:59.156 "is_configured": true, 00:13:59.156 "data_offset": 2048, 00:13:59.156 "data_size": 63488 00:13:59.156 }, 00:13:59.156 { 00:13:59.156 "name": null, 00:13:59.156 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:13:59.156 "is_configured": false, 00:13:59.156 "data_offset": 0, 00:13:59.156 "data_size": 63488 00:13:59.156 }, 00:13:59.156 { 00:13:59.156 "name": "BaseBdev3", 00:13:59.156 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:13:59.156 "is_configured": true, 00:13:59.156 "data_offset": 2048, 00:13:59.156 "data_size": 63488 00:13:59.156 } 00:13:59.156 ] 00:13:59.156 }' 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.156 09:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.721 [2024-11-06 09:06:36.065410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.721 "name": "Existed_Raid", 00:13:59.721 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:13:59.721 "strip_size_kb": 64, 00:13:59.721 "state": "configuring", 00:13:59.721 "raid_level": "raid0", 00:13:59.721 "superblock": true, 00:13:59.721 "num_base_bdevs": 3, 00:13:59.721 "num_base_bdevs_discovered": 1, 00:13:59.721 "num_base_bdevs_operational": 3, 00:13:59.721 "base_bdevs_list": [ 00:13:59.721 { 00:13:59.721 "name": null, 00:13:59.721 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:13:59.721 "is_configured": false, 00:13:59.721 "data_offset": 0, 00:13:59.721 "data_size": 63488 00:13:59.721 }, 00:13:59.721 { 00:13:59.721 "name": null, 00:13:59.721 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:13:59.721 "is_configured": false, 00:13:59.721 "data_offset": 0, 00:13:59.721 "data_size": 63488 00:13:59.721 
}, 00:13:59.721 { 00:13:59.721 "name": "BaseBdev3", 00:13:59.721 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:13:59.721 "is_configured": true, 00:13:59.721 "data_offset": 2048, 00:13:59.721 "data_size": 63488 00:13:59.721 } 00:13:59.721 ] 00:13:59.721 }' 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.721 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.288 [2024-11-06 09:06:36.759069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.288 "name": "Existed_Raid", 00:14:00.288 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:14:00.288 "strip_size_kb": 64, 00:14:00.288 "state": "configuring", 00:14:00.288 "raid_level": "raid0", 00:14:00.288 "superblock": true, 00:14:00.288 "num_base_bdevs": 3, 00:14:00.288 "num_base_bdevs_discovered": 2, 00:14:00.288 
"num_base_bdevs_operational": 3, 00:14:00.288 "base_bdevs_list": [ 00:14:00.288 { 00:14:00.288 "name": null, 00:14:00.288 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:14:00.288 "is_configured": false, 00:14:00.288 "data_offset": 0, 00:14:00.288 "data_size": 63488 00:14:00.288 }, 00:14:00.288 { 00:14:00.288 "name": "BaseBdev2", 00:14:00.288 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:14:00.288 "is_configured": true, 00:14:00.288 "data_offset": 2048, 00:14:00.288 "data_size": 63488 00:14:00.288 }, 00:14:00.288 { 00:14:00.288 "name": "BaseBdev3", 00:14:00.288 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:14:00.288 "is_configured": true, 00:14:00.288 "data_offset": 2048, 00:14:00.288 "data_size": 63488 00:14:00.288 } 00:14:00.288 ] 00:14:00.288 }' 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.288 09:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- 
# jq -r '.[0].base_bdevs_list[0].uuid' 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.853 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 079d8fe9-1456-47cc-bc46-eb57fa79a671 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.112 [2024-11-06 09:06:37.444461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:01.112 NewBaseBdev 00:14:01.112 [2024-11-06 09:06:37.445092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:01.112 [2024-11-06 09:06:37.445124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:01.112 [2024-11-06 09:06:37.445464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:01.112 [2024-11-06 09:06:37.445669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:01.112 [2024-11-06 09:06:37.445686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:01.112 [2024-11-06 09:06:37.445847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:01.112 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.113 [ 00:14:01.113 { 00:14:01.113 "name": "NewBaseBdev", 00:14:01.113 "aliases": [ 00:14:01.113 "079d8fe9-1456-47cc-bc46-eb57fa79a671" 00:14:01.113 ], 00:14:01.113 "product_name": "Malloc disk", 00:14:01.113 "block_size": 512, 00:14:01.113 "num_blocks": 65536, 00:14:01.113 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:14:01.113 "assigned_rate_limits": { 00:14:01.113 "rw_ios_per_sec": 0, 00:14:01.113 "rw_mbytes_per_sec": 0, 00:14:01.113 "r_mbytes_per_sec": 0, 00:14:01.113 "w_mbytes_per_sec": 0 00:14:01.113 }, 00:14:01.113 "claimed": true, 00:14:01.113 "claim_type": "exclusive_write", 00:14:01.113 "zoned": false, 00:14:01.113 "supported_io_types": { 00:14:01.113 "read": true, 00:14:01.113 "write": true, 00:14:01.113 "unmap": true, 00:14:01.113 "flush": true, 00:14:01.113 
"reset": true, 00:14:01.113 "nvme_admin": false, 00:14:01.113 "nvme_io": false, 00:14:01.113 "nvme_io_md": false, 00:14:01.113 "write_zeroes": true, 00:14:01.113 "zcopy": true, 00:14:01.113 "get_zone_info": false, 00:14:01.113 "zone_management": false, 00:14:01.113 "zone_append": false, 00:14:01.113 "compare": false, 00:14:01.113 "compare_and_write": false, 00:14:01.113 "abort": true, 00:14:01.113 "seek_hole": false, 00:14:01.113 "seek_data": false, 00:14:01.113 "copy": true, 00:14:01.113 "nvme_iov_md": false 00:14:01.113 }, 00:14:01.113 "memory_domains": [ 00:14:01.113 { 00:14:01.113 "dma_device_id": "system", 00:14:01.113 "dma_device_type": 1 00:14:01.113 }, 00:14:01.113 { 00:14:01.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.113 "dma_device_type": 2 00:14:01.113 } 00:14:01.113 ], 00:14:01.113 "driver_specific": {} 00:14:01.113 } 00:14:01.113 ] 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.113 "name": "Existed_Raid", 00:14:01.113 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:14:01.113 "strip_size_kb": 64, 00:14:01.113 "state": "online", 00:14:01.113 "raid_level": "raid0", 00:14:01.113 "superblock": true, 00:14:01.113 "num_base_bdevs": 3, 00:14:01.113 "num_base_bdevs_discovered": 3, 00:14:01.113 "num_base_bdevs_operational": 3, 00:14:01.113 "base_bdevs_list": [ 00:14:01.113 { 00:14:01.113 "name": "NewBaseBdev", 00:14:01.113 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:14:01.113 "is_configured": true, 00:14:01.113 "data_offset": 2048, 00:14:01.113 "data_size": 63488 00:14:01.113 }, 00:14:01.113 { 00:14:01.113 "name": "BaseBdev2", 00:14:01.113 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:14:01.113 "is_configured": true, 00:14:01.113 "data_offset": 2048, 00:14:01.113 "data_size": 63488 00:14:01.113 }, 00:14:01.113 { 00:14:01.113 "name": "BaseBdev3", 00:14:01.113 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:14:01.113 "is_configured": true, 00:14:01.113 "data_offset": 2048, 
00:14:01.113 "data_size": 63488 00:14:01.113 } 00:14:01.113 ] 00:14:01.113 }' 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.113 09:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:01.681 [2024-11-06 09:06:38.029119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:01.681 "name": "Existed_Raid", 00:14:01.681 "aliases": [ 00:14:01.681 "41b2bfae-0279-4309-900f-26e4bb4484f5" 00:14:01.681 ], 00:14:01.681 "product_name": "Raid Volume", 00:14:01.681 "block_size": 512, 00:14:01.681 
"num_blocks": 190464, 00:14:01.681 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:14:01.681 "assigned_rate_limits": { 00:14:01.681 "rw_ios_per_sec": 0, 00:14:01.681 "rw_mbytes_per_sec": 0, 00:14:01.681 "r_mbytes_per_sec": 0, 00:14:01.681 "w_mbytes_per_sec": 0 00:14:01.681 }, 00:14:01.681 "claimed": false, 00:14:01.681 "zoned": false, 00:14:01.681 "supported_io_types": { 00:14:01.681 "read": true, 00:14:01.681 "write": true, 00:14:01.681 "unmap": true, 00:14:01.681 "flush": true, 00:14:01.681 "reset": true, 00:14:01.681 "nvme_admin": false, 00:14:01.681 "nvme_io": false, 00:14:01.681 "nvme_io_md": false, 00:14:01.681 "write_zeroes": true, 00:14:01.681 "zcopy": false, 00:14:01.681 "get_zone_info": false, 00:14:01.681 "zone_management": false, 00:14:01.681 "zone_append": false, 00:14:01.681 "compare": false, 00:14:01.681 "compare_and_write": false, 00:14:01.681 "abort": false, 00:14:01.681 "seek_hole": false, 00:14:01.681 "seek_data": false, 00:14:01.681 "copy": false, 00:14:01.681 "nvme_iov_md": false 00:14:01.681 }, 00:14:01.681 "memory_domains": [ 00:14:01.681 { 00:14:01.681 "dma_device_id": "system", 00:14:01.681 "dma_device_type": 1 00:14:01.681 }, 00:14:01.681 { 00:14:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.681 "dma_device_type": 2 00:14:01.681 }, 00:14:01.681 { 00:14:01.681 "dma_device_id": "system", 00:14:01.681 "dma_device_type": 1 00:14:01.681 }, 00:14:01.681 { 00:14:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.681 "dma_device_type": 2 00:14:01.681 }, 00:14:01.681 { 00:14:01.681 "dma_device_id": "system", 00:14:01.681 "dma_device_type": 1 00:14:01.681 }, 00:14:01.681 { 00:14:01.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.681 "dma_device_type": 2 00:14:01.681 } 00:14:01.681 ], 00:14:01.681 "driver_specific": { 00:14:01.681 "raid": { 00:14:01.681 "uuid": "41b2bfae-0279-4309-900f-26e4bb4484f5", 00:14:01.681 "strip_size_kb": 64, 00:14:01.681 "state": "online", 00:14:01.681 "raid_level": "raid0", 00:14:01.681 
"superblock": true, 00:14:01.681 "num_base_bdevs": 3, 00:14:01.681 "num_base_bdevs_discovered": 3, 00:14:01.681 "num_base_bdevs_operational": 3, 00:14:01.681 "base_bdevs_list": [ 00:14:01.681 { 00:14:01.681 "name": "NewBaseBdev", 00:14:01.681 "uuid": "079d8fe9-1456-47cc-bc46-eb57fa79a671", 00:14:01.681 "is_configured": true, 00:14:01.681 "data_offset": 2048, 00:14:01.681 "data_size": 63488 00:14:01.681 }, 00:14:01.681 { 00:14:01.681 "name": "BaseBdev2", 00:14:01.681 "uuid": "1b82663b-c544-4645-af7a-426853a2df53", 00:14:01.681 "is_configured": true, 00:14:01.681 "data_offset": 2048, 00:14:01.681 "data_size": 63488 00:14:01.681 }, 00:14:01.681 { 00:14:01.681 "name": "BaseBdev3", 00:14:01.681 "uuid": "25f18986-32df-47c3-a282-5d1c4387ed4f", 00:14:01.681 "is_configured": true, 00:14:01.681 "data_offset": 2048, 00:14:01.681 "data_size": 63488 00:14:01.681 } 00:14:01.681 ] 00:14:01.681 } 00:14:01.681 } 00:14:01.681 }' 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:01.681 BaseBdev2 00:14:01.681 BaseBdev3' 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:01.681 09:06:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.681 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.940 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.941 [2024-11-06 09:06:38.364854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.941 [2024-11-06 09:06:38.365024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.941 [2024-11-06 09:06:38.365224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.941 [2024-11-06 09:06:38.365423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.941 [2024-11-06 09:06:38.365563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64364 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 64364 ']' 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 64364 00:14:01.941 09:06:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64364 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:01.941 killing process with pid 64364 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64364' 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 64364 00:14:01.941 [2024-11-06 09:06:38.403774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:01.941 09:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 64364 00:14:02.199 [2024-11-06 09:06:38.685372] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.572 ************************************ 00:14:03.572 END TEST raid_state_function_test_sb 00:14:03.572 ************************************ 00:14:03.572 09:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:03.572 00:14:03.572 real 0m12.022s 00:14:03.572 user 0m19.894s 00:14:03.572 sys 0m1.671s 00:14:03.572 09:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:03.572 09:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.572 09:06:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:14:03.572 09:06:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:03.572 09:06:39 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:14:03.572 09:06:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.572 ************************************ 00:14:03.572 START TEST raid_superblock_test 00:14:03.572 ************************************ 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:03.572 09:06:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65001 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65001 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 65001 ']' 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:03.572 09:06:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.572 [2024-11-06 09:06:39.938429] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:14:03.572 [2024-11-06 09:06:39.938934] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65001 ] 00:14:03.829 [2024-11-06 09:06:40.124593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.829 [2024-11-06 09:06:40.258976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.088 [2024-11-06 09:06:40.478467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.088 [2024-11-06 09:06:40.478543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:04.653 
09:06:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.653 09:06:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.653 malloc1 00:14:04.653 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.653 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.654 [2024-11-06 09:06:41.033550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:04.654 [2024-11-06 09:06:41.033801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.654 [2024-11-06 09:06:41.034012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.654 [2024-11-06 09:06:41.034193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.654 [2024-11-06 09:06:41.037196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.654 [2024-11-06 09:06:41.037376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:04.654 pt1 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.654 malloc2 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.654 [2024-11-06 09:06:41.090435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:04.654 [2024-11-06 09:06:41.090514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.654 [2024-11-06 09:06:41.090547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.654 [2024-11-06 09:06:41.090561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.654 [2024-11-06 09:06:41.093447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.654 [2024-11-06 09:06:41.093506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:04.654 
pt2 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.654 malloc3 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.654 [2024-11-06 09:06:41.159212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:04.654 [2024-11-06 09:06:41.159412] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.654 [2024-11-06 09:06:41.159625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.654 [2024-11-06 09:06:41.159771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.654 [2024-11-06 09:06:41.162706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.654 [2024-11-06 09:06:41.162752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:04.654 pt3 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.654 [2024-11-06 09:06:41.171532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:04.654 [2024-11-06 09:06:41.174257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:04.654 [2024-11-06 09:06:41.174497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:04.654 [2024-11-06 09:06:41.174920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:04.654 [2024-11-06 09:06:41.175064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:04.654 [2024-11-06 09:06:41.175582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:04.654 [2024-11-06 09:06:41.175982] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:04.654 [2024-11-06 09:06:41.176007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:04.654 [2024-11-06 09:06:41.176241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.654 09:06:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.913 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.913 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.913 "name": "raid_bdev1", 00:14:04.913 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:04.913 "strip_size_kb": 64, 00:14:04.913 "state": "online", 00:14:04.913 "raid_level": "raid0", 00:14:04.913 "superblock": true, 00:14:04.913 "num_base_bdevs": 3, 00:14:04.913 "num_base_bdevs_discovered": 3, 00:14:04.913 "num_base_bdevs_operational": 3, 00:14:04.913 "base_bdevs_list": [ 00:14:04.913 { 00:14:04.913 "name": "pt1", 00:14:04.913 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:04.913 "is_configured": true, 00:14:04.913 "data_offset": 2048, 00:14:04.913 "data_size": 63488 00:14:04.913 }, 00:14:04.913 { 00:14:04.913 "name": "pt2", 00:14:04.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:04.913 "is_configured": true, 00:14:04.913 "data_offset": 2048, 00:14:04.913 "data_size": 63488 00:14:04.913 }, 00:14:04.913 { 00:14:04.913 "name": "pt3", 00:14:04.913 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:04.913 "is_configured": true, 00:14:04.913 "data_offset": 2048, 00:14:04.913 "data_size": 63488 00:14:04.913 } 00:14:04.913 ] 00:14:04.913 }' 00:14:04.913 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.913 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.235 [2024-11-06 09:06:41.712807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.235 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.537 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:05.537 "name": "raid_bdev1", 00:14:05.537 "aliases": [ 00:14:05.537 "9be4f4c9-d4be-4c4c-be48-484a11a383dd" 00:14:05.537 ], 00:14:05.537 "product_name": "Raid Volume", 00:14:05.537 "block_size": 512, 00:14:05.537 "num_blocks": 190464, 00:14:05.537 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:05.537 "assigned_rate_limits": { 00:14:05.537 "rw_ios_per_sec": 0, 00:14:05.537 "rw_mbytes_per_sec": 0, 00:14:05.537 "r_mbytes_per_sec": 0, 00:14:05.537 "w_mbytes_per_sec": 0 00:14:05.537 }, 00:14:05.537 "claimed": false, 00:14:05.537 "zoned": false, 00:14:05.537 "supported_io_types": { 00:14:05.537 "read": true, 00:14:05.537 "write": true, 00:14:05.537 "unmap": true, 00:14:05.537 "flush": true, 00:14:05.537 "reset": true, 00:14:05.537 "nvme_admin": false, 00:14:05.538 "nvme_io": false, 00:14:05.538 "nvme_io_md": false, 00:14:05.538 "write_zeroes": true, 00:14:05.538 "zcopy": false, 00:14:05.538 "get_zone_info": false, 00:14:05.538 "zone_management": false, 00:14:05.538 "zone_append": false, 00:14:05.538 "compare": 
false, 00:14:05.538 "compare_and_write": false, 00:14:05.538 "abort": false, 00:14:05.538 "seek_hole": false, 00:14:05.538 "seek_data": false, 00:14:05.538 "copy": false, 00:14:05.538 "nvme_iov_md": false 00:14:05.538 }, 00:14:05.538 "memory_domains": [ 00:14:05.538 { 00:14:05.538 "dma_device_id": "system", 00:14:05.538 "dma_device_type": 1 00:14:05.538 }, 00:14:05.538 { 00:14:05.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.538 "dma_device_type": 2 00:14:05.538 }, 00:14:05.538 { 00:14:05.538 "dma_device_id": "system", 00:14:05.538 "dma_device_type": 1 00:14:05.538 }, 00:14:05.538 { 00:14:05.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.538 "dma_device_type": 2 00:14:05.538 }, 00:14:05.538 { 00:14:05.538 "dma_device_id": "system", 00:14:05.538 "dma_device_type": 1 00:14:05.538 }, 00:14:05.538 { 00:14:05.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.538 "dma_device_type": 2 00:14:05.538 } 00:14:05.538 ], 00:14:05.538 "driver_specific": { 00:14:05.538 "raid": { 00:14:05.538 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:05.538 "strip_size_kb": 64, 00:14:05.538 "state": "online", 00:14:05.538 "raid_level": "raid0", 00:14:05.538 "superblock": true, 00:14:05.538 "num_base_bdevs": 3, 00:14:05.538 "num_base_bdevs_discovered": 3, 00:14:05.538 "num_base_bdevs_operational": 3, 00:14:05.538 "base_bdevs_list": [ 00:14:05.538 { 00:14:05.538 "name": "pt1", 00:14:05.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:05.538 "is_configured": true, 00:14:05.538 "data_offset": 2048, 00:14:05.538 "data_size": 63488 00:14:05.538 }, 00:14:05.538 { 00:14:05.538 "name": "pt2", 00:14:05.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:05.538 "is_configured": true, 00:14:05.538 "data_offset": 2048, 00:14:05.538 "data_size": 63488 00:14:05.538 }, 00:14:05.538 { 00:14:05.538 "name": "pt3", 00:14:05.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:05.538 "is_configured": true, 00:14:05.538 "data_offset": 2048, 00:14:05.538 "data_size": 
63488 00:14:05.538 } 00:14:05.538 ] 00:14:05.538 } 00:14:05.538 } 00:14:05.538 }' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:05.538 pt2 00:14:05.538 pt3' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.538 09:06:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.538 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:05.538 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:05.538 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.538 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:05.538 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.538 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.538 [2024-11-06 09:06:42.028833] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.538 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9be4f4c9-d4be-4c4c-be48-484a11a383dd 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9be4f4c9-d4be-4c4c-be48-484a11a383dd ']' 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 [2024-11-06 09:06:42.076477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.797 [2024-11-06 09:06:42.076662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:05.797 [2024-11-06 09:06:42.076967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:05.797 [2024-11-06 09:06:42.077217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:05.797 [2024-11-06 09:06:42.077245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:05.797 09:06:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.797 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.797 [2024-11-06 09:06:42.220603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:05.797 [2024-11-06 09:06:42.223287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:05.797 [2024-11-06 09:06:42.223362] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:05.797 [2024-11-06 09:06:42.223433] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:05.797 [2024-11-06 09:06:42.223507] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:05.798 [2024-11-06 09:06:42.223544] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:05.798 [2024-11-06 09:06:42.223572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:05.798 [2024-11-06 09:06:42.223588] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:05.798 request: 00:14:05.798 { 00:14:05.798 "name": "raid_bdev1", 00:14:05.798 "raid_level": "raid0", 00:14:05.798 "base_bdevs": [ 00:14:05.798 "malloc1", 00:14:05.798 "malloc2", 00:14:05.798 "malloc3" 00:14:05.798 ], 00:14:05.798 "strip_size_kb": 64, 00:14:05.798 "superblock": false, 00:14:05.798 "method": "bdev_raid_create", 00:14:05.798 "req_id": 1 00:14:05.798 } 00:14:05.798 Got JSON-RPC error response 00:14:05.798 response: 00:14:05.798 { 00:14:05.798 "code": -17, 00:14:05.798 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:05.798 } 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.798 [2024-11-06 09:06:42.284625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:05.798 [2024-11-06 09:06:42.284852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.798 [2024-11-06 09:06:42.285040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:05.798 [2024-11-06 09:06:42.285201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.798 [2024-11-06 09:06:42.288398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.798 [2024-11-06 09:06:42.288562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:05.798 pt1 00:14:05.798 [2024-11-06 09:06:42.288869] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:05.798 [2024-11-06 09:06:42.288949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt1 is claimed 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.798 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.056 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.056 "name": "raid_bdev1", 00:14:06.056 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:06.056 
"strip_size_kb": 64, 00:14:06.056 "state": "configuring", 00:14:06.056 "raid_level": "raid0", 00:14:06.056 "superblock": true, 00:14:06.056 "num_base_bdevs": 3, 00:14:06.056 "num_base_bdevs_discovered": 1, 00:14:06.056 "num_base_bdevs_operational": 3, 00:14:06.056 "base_bdevs_list": [ 00:14:06.056 { 00:14:06.056 "name": "pt1", 00:14:06.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.056 "is_configured": true, 00:14:06.056 "data_offset": 2048, 00:14:06.056 "data_size": 63488 00:14:06.056 }, 00:14:06.056 { 00:14:06.056 "name": null, 00:14:06.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.056 "is_configured": false, 00:14:06.056 "data_offset": 2048, 00:14:06.056 "data_size": 63488 00:14:06.056 }, 00:14:06.056 { 00:14:06.056 "name": null, 00:14:06.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.056 "is_configured": false, 00:14:06.056 "data_offset": 2048, 00:14:06.056 "data_size": 63488 00:14:06.056 } 00:14:06.056 ] 00:14:06.056 }' 00:14:06.056 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.056 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.314 [2024-11-06 09:06:42.817119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:06.314 [2024-11-06 09:06:42.817198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.314 [2024-11-06 09:06:42.817232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:14:06.314 [2024-11-06 09:06:42.817248] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.314 [2024-11-06 09:06:42.817794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.314 [2024-11-06 09:06:42.817864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:06.314 [2024-11-06 09:06:42.818026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:06.314 [2024-11-06 09:06:42.818062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:06.314 pt2 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.314 [2024-11-06 09:06:42.825102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.314 09:06:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.314 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.572 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.572 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.572 "name": "raid_bdev1", 00:14:06.572 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:06.572 "strip_size_kb": 64, 00:14:06.572 "state": "configuring", 00:14:06.572 "raid_level": "raid0", 00:14:06.572 "superblock": true, 00:14:06.572 "num_base_bdevs": 3, 00:14:06.572 "num_base_bdevs_discovered": 1, 00:14:06.572 "num_base_bdevs_operational": 3, 00:14:06.572 "base_bdevs_list": [ 00:14:06.572 { 00:14:06.572 "name": "pt1", 00:14:06.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:06.572 "is_configured": true, 00:14:06.572 "data_offset": 2048, 00:14:06.572 "data_size": 63488 00:14:06.572 }, 00:14:06.572 { 00:14:06.572 "name": null, 00:14:06.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:06.572 "is_configured": false, 00:14:06.572 "data_offset": 0, 00:14:06.572 "data_size": 63488 00:14:06.572 }, 00:14:06.572 { 00:14:06.572 "name": null, 00:14:06.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:06.572 
"is_configured": false, 00:14:06.572 "data_offset": 2048, 00:14:06.572 "data_size": 63488 00:14:06.572 } 00:14:06.572 ] 00:14:06.572 }' 00:14:06.572 09:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.572 09:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.829 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:06.829 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:06.829 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:06.829 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.829 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 [2024-11-06 09:06:43.365285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:07.089 [2024-11-06 09:06:43.365520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.089 [2024-11-06 09:06:43.365709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:07.089 [2024-11-06 09:06:43.365902] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.089 [2024-11-06 09:06:43.366559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.089 [2024-11-06 09:06:43.366597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:07.089 [2024-11-06 09:06:43.366696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:07.089 [2024-11-06 09:06:43.366733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:07.089 pt2 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
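The `NOT rpc_cmd bdev_raid_create ...` step traced earlier expects the RPC to fail (the base bdevs already carry a superblock from another raid bdev, so the server returns `-17` / "File exists", and the trace records `es=1`). The inversion helper can be sketched like this; `NOT` is simplified from the real one in `autotest_common.sh`, and `rpc_stub` is a hypothetical stand-in that reproduces the failure rather than a real RPC call.

```shell
#!/usr/bin/env bash
# Simplified sketch of the NOT helper: succeed only if the wrapped
# command fails (the real helper also propagates es to the caller).
NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
}

# Hypothetical stub reproducing the -17 "File exists" rejection.
rpc_stub() {
        echo 'Failed to create RAID bdev raid_bdev1: File exists' >&2
        return 1
}

if NOT rpc_stub bdev_raid_create -z 64 -r raid0 -n raid_bdev1 2>/dev/null; then
        echo "creation correctly rejected"
fi
```

This pattern keeps `set -e`-style test flow intact: an expected failure is converted into a success, while an unexpected success from the wrapped command makes `NOT` itself fail the test.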
00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 [2024-11-06 09:06:43.373260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:07.089 [2024-11-06 09:06:43.373464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.089 [2024-11-06 09:06:43.373632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:07.089 [2024-11-06 09:06:43.373847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.089 [2024-11-06 09:06:43.374509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.089 [2024-11-06 09:06:43.374753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:07.089 [2024-11-06 09:06:43.375071] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:07.089 [2024-11-06 09:06:43.375258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:07.089 [2024-11-06 09:06:43.375586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:07.089 [2024-11-06 09:06:43.375722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:07.089 [2024-11-06 09:06:43.376232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:07.089 [2024-11-06 09:06:43.376626] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:07.089 pt3 00:14:07.089 [2024-11-06 09:06:43.376755] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:07.089 [2024-11-06 09:06:43.377031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.089 "name": "raid_bdev1", 00:14:07.089 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:07.089 "strip_size_kb": 64, 00:14:07.089 "state": "online", 00:14:07.089 "raid_level": "raid0", 00:14:07.089 "superblock": true, 00:14:07.089 "num_base_bdevs": 3, 00:14:07.089 "num_base_bdevs_discovered": 3, 00:14:07.089 "num_base_bdevs_operational": 3, 00:14:07.089 "base_bdevs_list": [ 00:14:07.089 { 00:14:07.089 "name": "pt1", 00:14:07.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:07.089 "is_configured": true, 00:14:07.089 "data_offset": 2048, 00:14:07.089 "data_size": 63488 00:14:07.089 }, 00:14:07.089 { 00:14:07.089 "name": "pt2", 00:14:07.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.089 "is_configured": true, 00:14:07.089 "data_offset": 2048, 00:14:07.089 "data_size": 63488 00:14:07.089 }, 00:14:07.089 { 00:14:07.089 "name": "pt3", 00:14:07.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.089 "is_configured": true, 00:14:07.089 "data_offset": 2048, 00:14:07.089 "data_size": 63488 00:14:07.089 } 00:14:07.089 ] 00:14:07.089 }' 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.089 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:07.657 09:06:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:07.657 [2024-11-06 09:06:43.897887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:07.657 "name": "raid_bdev1", 00:14:07.657 "aliases": [ 00:14:07.657 "9be4f4c9-d4be-4c4c-be48-484a11a383dd" 00:14:07.657 ], 00:14:07.657 "product_name": "Raid Volume", 00:14:07.657 "block_size": 512, 00:14:07.657 "num_blocks": 190464, 00:14:07.657 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:07.657 "assigned_rate_limits": { 00:14:07.657 "rw_ios_per_sec": 0, 00:14:07.657 "rw_mbytes_per_sec": 0, 00:14:07.657 "r_mbytes_per_sec": 0, 00:14:07.657 "w_mbytes_per_sec": 0 00:14:07.657 }, 00:14:07.657 "claimed": false, 00:14:07.657 "zoned": false, 00:14:07.657 "supported_io_types": { 00:14:07.657 "read": true, 00:14:07.657 "write": true, 00:14:07.657 "unmap": true, 00:14:07.657 "flush": true, 00:14:07.657 "reset": true, 00:14:07.657 "nvme_admin": false, 00:14:07.657 "nvme_io": false, 00:14:07.657 "nvme_io_md": false, 00:14:07.657 
"write_zeroes": true, 00:14:07.657 "zcopy": false, 00:14:07.657 "get_zone_info": false, 00:14:07.657 "zone_management": false, 00:14:07.657 "zone_append": false, 00:14:07.657 "compare": false, 00:14:07.657 "compare_and_write": false, 00:14:07.657 "abort": false, 00:14:07.657 "seek_hole": false, 00:14:07.657 "seek_data": false, 00:14:07.657 "copy": false, 00:14:07.657 "nvme_iov_md": false 00:14:07.657 }, 00:14:07.657 "memory_domains": [ 00:14:07.657 { 00:14:07.657 "dma_device_id": "system", 00:14:07.657 "dma_device_type": 1 00:14:07.657 }, 00:14:07.657 { 00:14:07.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.657 "dma_device_type": 2 00:14:07.657 }, 00:14:07.657 { 00:14:07.657 "dma_device_id": "system", 00:14:07.657 "dma_device_type": 1 00:14:07.657 }, 00:14:07.657 { 00:14:07.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.657 "dma_device_type": 2 00:14:07.657 }, 00:14:07.657 { 00:14:07.657 "dma_device_id": "system", 00:14:07.657 "dma_device_type": 1 00:14:07.657 }, 00:14:07.657 { 00:14:07.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:07.657 "dma_device_type": 2 00:14:07.657 } 00:14:07.657 ], 00:14:07.657 "driver_specific": { 00:14:07.657 "raid": { 00:14:07.657 "uuid": "9be4f4c9-d4be-4c4c-be48-484a11a383dd", 00:14:07.657 "strip_size_kb": 64, 00:14:07.657 "state": "online", 00:14:07.657 "raid_level": "raid0", 00:14:07.657 "superblock": true, 00:14:07.657 "num_base_bdevs": 3, 00:14:07.657 "num_base_bdevs_discovered": 3, 00:14:07.657 "num_base_bdevs_operational": 3, 00:14:07.657 "base_bdevs_list": [ 00:14:07.657 { 00:14:07.657 "name": "pt1", 00:14:07.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:07.657 "is_configured": true, 00:14:07.657 "data_offset": 2048, 00:14:07.657 "data_size": 63488 00:14:07.657 }, 00:14:07.657 { 00:14:07.657 "name": "pt2", 00:14:07.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:07.657 "is_configured": true, 00:14:07.657 "data_offset": 2048, 00:14:07.657 "data_size": 63488 00:14:07.657 }, 00:14:07.657 
{ 00:14:07.657 "name": "pt3", 00:14:07.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:07.657 "is_configured": true, 00:14:07.657 "data_offset": 2048, 00:14:07.657 "data_size": 63488 00:14:07.657 } 00:14:07.657 ] 00:14:07.657 } 00:14:07.657 } 00:14:07.657 }' 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:07.657 pt2 00:14:07.657 pt3' 00:14:07.657 09:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:07.657 09:06:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:07.657 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.916 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.916 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.916 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:07.916 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.916 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.917 
[2024-11-06 09:06:44.225862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9be4f4c9-d4be-4c4c-be48-484a11a383dd '!=' 9be4f4c9-d4be-4c4c-be48-484a11a383dd ']' 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65001 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 65001 ']' 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 65001 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65001 00:14:07.917 killing process with pid 65001 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65001' 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 65001 00:14:07.917 [2024-11-06 09:06:44.303687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.917 09:06:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 65001 00:14:07.917 [2024-11-06 09:06:44.303794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.917 [2024-11-06 09:06:44.303899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.917 [2024-11-06 09:06:44.303921] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:08.175 [2024-11-06 09:06:44.575121] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.110 09:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:09.110 00:14:09.110 real 0m5.741s 00:14:09.110 user 0m8.748s 00:14:09.110 sys 0m0.812s 00:14:09.110 09:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:09.110 ************************************ 00:14:09.110 END TEST raid_superblock_test 00:14:09.110 ************************************ 00:14:09.110 09:06:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.110 09:06:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:14:09.110 09:06:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:09.110 09:06:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:09.110 09:06:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.110 ************************************ 00:14:09.110 START TEST raid_read_error_test 00:14:09.110 ************************************ 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:09.110 09:06:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QAcco1np99 00:14:09.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65265 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65265 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 65265 ']' 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:09.110 09:06:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.111 09:06:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:09.111 09:06:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.369 [2024-11-06 09:06:45.740032] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:14:09.369 [2024-11-06 09:06:45.740228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65265 ] 00:14:09.627 [2024-11-06 09:06:45.925192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.627 [2024-11-06 09:06:46.061423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.886 [2024-11-06 09:06:46.264895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.886 [2024-11-06 09:06:46.264967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.453 BaseBdev1_malloc 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.453 true 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.453 [2024-11-06 09:06:46.813781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:10.453 [2024-11-06 09:06:46.814015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.453 [2024-11-06 09:06:46.814055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:10.453 [2024-11-06 09:06:46.814075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.453 [2024-11-06 09:06:46.816948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.453 [2024-11-06 09:06:46.816999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:10.453 BaseBdev1 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.453 BaseBdev2_malloc 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.453 true 00:14:10.453 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.454 [2024-11-06 09:06:46.877861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:10.454 [2024-11-06 09:06:46.877942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.454 [2024-11-06 09:06:46.877970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:10.454 [2024-11-06 09:06:46.877992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.454 [2024-11-06 09:06:46.881023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.454 [2024-11-06 09:06:46.881100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:10.454 BaseBdev2 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.454 BaseBdev3_malloc 00:14:10.454 09:06:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.454 true 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.454 [2024-11-06 09:06:46.944365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:10.454 [2024-11-06 09:06:46.944448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.454 [2024-11-06 09:06:46.944485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:10.454 [2024-11-06 09:06:46.944505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.454 [2024-11-06 09:06:46.947525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.454 BaseBdev3 00:14:10.454 [2024-11-06 09:06:46.947768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.454 [2024-11-06 09:06:46.952495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.454 [2024-11-06 09:06:46.955077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.454 [2024-11-06 09:06:46.955326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.454 [2024-11-06 09:06:46.955600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:10.454 [2024-11-06 09:06:46.955621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:10.454 [2024-11-06 09:06:46.955983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:10.454 [2024-11-06 09:06:46.956266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:10.454 [2024-11-06 09:06:46.956296] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:10.454 [2024-11-06 09:06:46.956553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.454 09:06:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.454 09:06:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.713 09:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.713 "name": "raid_bdev1", 00:14:10.713 "uuid": "3b49c9a2-f51c-47a3-8c6d-c55e4c52c671", 00:14:10.713 "strip_size_kb": 64, 00:14:10.713 "state": "online", 00:14:10.713 "raid_level": "raid0", 00:14:10.713 "superblock": true, 00:14:10.713 "num_base_bdevs": 3, 00:14:10.713 "num_base_bdevs_discovered": 3, 00:14:10.713 "num_base_bdevs_operational": 3, 00:14:10.713 "base_bdevs_list": [ 00:14:10.713 { 00:14:10.713 "name": "BaseBdev1", 00:14:10.713 "uuid": "29ca0cb5-36e7-5ca0-9fed-7c2b89c5b5e3", 00:14:10.713 "is_configured": true, 00:14:10.713 "data_offset": 2048, 00:14:10.713 "data_size": 63488 00:14:10.713 }, 00:14:10.713 { 00:14:10.713 "name": "BaseBdev2", 00:14:10.713 "uuid": "57607982-9065-524e-ba7a-199b1138b0d3", 00:14:10.713 "is_configured": true, 00:14:10.713 "data_offset": 2048, 00:14:10.713 "data_size": 63488 
00:14:10.713 }, 00:14:10.713 { 00:14:10.713 "name": "BaseBdev3", 00:14:10.713 "uuid": "e3e4eec1-267f-5885-a75b-ed0166f59383", 00:14:10.713 "is_configured": true, 00:14:10.713 "data_offset": 2048, 00:14:10.713 "data_size": 63488 00:14:10.713 } 00:14:10.713 ] 00:14:10.713 }' 00:14:10.713 09:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.713 09:06:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.971 09:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:10.971 09:06:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:11.231 [2024-11-06 09:06:47.626068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.216 "name": "raid_bdev1", 00:14:12.216 "uuid": "3b49c9a2-f51c-47a3-8c6d-c55e4c52c671", 00:14:12.216 "strip_size_kb": 64, 00:14:12.216 "state": "online", 00:14:12.216 "raid_level": "raid0", 00:14:12.216 "superblock": true, 00:14:12.216 "num_base_bdevs": 3, 00:14:12.216 "num_base_bdevs_discovered": 3, 00:14:12.216 "num_base_bdevs_operational": 3, 00:14:12.216 "base_bdevs_list": [ 00:14:12.216 { 00:14:12.216 "name": "BaseBdev1", 00:14:12.216 "uuid": "29ca0cb5-36e7-5ca0-9fed-7c2b89c5b5e3", 00:14:12.216 "is_configured": true, 00:14:12.216 "data_offset": 2048, 00:14:12.216 "data_size": 63488 
00:14:12.216 }, 00:14:12.216 { 00:14:12.216 "name": "BaseBdev2", 00:14:12.216 "uuid": "57607982-9065-524e-ba7a-199b1138b0d3", 00:14:12.216 "is_configured": true, 00:14:12.216 "data_offset": 2048, 00:14:12.216 "data_size": 63488 00:14:12.216 }, 00:14:12.216 { 00:14:12.216 "name": "BaseBdev3", 00:14:12.216 "uuid": "e3e4eec1-267f-5885-a75b-ed0166f59383", 00:14:12.216 "is_configured": true, 00:14:12.216 "data_offset": 2048, 00:14:12.216 "data_size": 63488 00:14:12.216 } 00:14:12.216 ] 00:14:12.216 }' 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.216 09:06:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.783 [2024-11-06 09:06:49.032663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:12.783 [2024-11-06 09:06:49.032697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.783 { 00:14:12.783 "results": [ 00:14:12.783 { 00:14:12.783 "job": "raid_bdev1", 00:14:12.783 "core_mask": "0x1", 00:14:12.783 "workload": "randrw", 00:14:12.783 "percentage": 50, 00:14:12.783 "status": "finished", 00:14:12.783 "queue_depth": 1, 00:14:12.783 "io_size": 131072, 00:14:12.783 "runtime": 1.40386, 00:14:12.783 "iops": 9991.737067798784, 00:14:12.783 "mibps": 1248.967133474848, 00:14:12.783 "io_failed": 1, 00:14:12.783 "io_timeout": 0, 00:14:12.783 "avg_latency_us": 139.75749695414365, 00:14:12.783 "min_latency_us": 37.93454545454546, 00:14:12.783 "max_latency_us": 2025.658181818182 00:14:12.783 } 00:14:12.783 ], 00:14:12.783 "core_count": 1 00:14:12.783 } 00:14:12.783 [2024-11-06 09:06:49.036307] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.783 [2024-11-06 09:06:49.036362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.783 [2024-11-06 09:06:49.036415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:12.783 [2024-11-06 09:06:49.036428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65265 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 65265 ']' 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 65265 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65265 00:14:12.783 killing process with pid 65265 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65265' 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 65265 00:14:12.783 [2024-11-06 09:06:49.070058] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.783 09:06:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 65265 00:14:12.783 [2024-11-06 09:06:49.292610] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.159 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QAcco1np99 00:14:14.159 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:14.159 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:14.159 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:14.160 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:14.160 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:14.160 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:14.160 09:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:14.160 00:14:14.160 real 0m4.838s 00:14:14.160 user 0m6.019s 00:14:14.160 sys 0m0.611s 00:14:14.160 09:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:14.160 ************************************ 00:14:14.160 END TEST raid_read_error_test 00:14:14.160 ************************************ 00:14:14.160 09:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.160 09:06:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:14:14.160 09:06:50 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:14.160 09:06:50 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:14.160 09:06:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:14.160 ************************************ 00:14:14.160 START TEST raid_write_error_test 00:14:14.160 ************************************ 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:14:14.160 09:06:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:14.160 09:06:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5bT2z8EIE5 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65405 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65405 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 65405 ']' 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.160 09:06:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.160 [2024-11-06 09:06:50.640227] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:14:14.160 [2024-11-06 09:06:50.640868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65405 ] 00:14:14.419 [2024-11-06 09:06:50.830625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.679 [2024-11-06 09:06:50.969791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.679 [2024-11-06 09:06:51.192350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:14.679 [2024-11-06 09:06:51.192403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 BaseBdev1_malloc 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 true 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.296 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.296 [2024-11-06 09:06:51.727703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:15.297 [2024-11-06 09:06:51.727789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.297 [2024-11-06 09:06:51.727833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:15.297 [2024-11-06 09:06:51.727855] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.297 [2024-11-06 09:06:51.730890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.297 [2024-11-06 09:06:51.730939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:15.297 BaseBdev1 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.297 BaseBdev2_malloc 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.297 true 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.297 [2024-11-06 09:06:51.786851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:15.297 [2024-11-06 09:06:51.787080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.297 [2024-11-06 09:06:51.787236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:15.297 [2024-11-06 09:06:51.787286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.297 [2024-11-06 09:06:51.790354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.297 BaseBdev2 00:14:15.297 [2024-11-06 09:06:51.790526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:15.297 09:06:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.297 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.555 BaseBdev3_malloc 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.555 true 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.555 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.555 [2024-11-06 09:06:51.859082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:15.555 [2024-11-06 09:06:51.859366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.555 [2024-11-06 09:06:51.859450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:15.556 [2024-11-06 09:06:51.859675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.556 [2024-11-06 09:06:51.863045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.556 [2024-11-06 09:06:51.863154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:15.556 BaseBdev3 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.556 [2024-11-06 09:06:51.867423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.556 [2024-11-06 09:06:51.870183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.556 [2024-11-06 09:06:51.870437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:15.556 [2024-11-06 09:06:51.870764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:15.556 [2024-11-06 09:06:51.870806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:15.556 [2024-11-06 09:06:51.871251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:15.556 [2024-11-06 09:06:51.871527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:15.556 [2024-11-06 09:06:51.871555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:15.556 [2024-11-06 09:06:51.871799] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.556 "name": "raid_bdev1", 00:14:15.556 "uuid": "be62cddb-6ed5-4b80-a040-c1b4af511127", 00:14:15.556 "strip_size_kb": 64, 00:14:15.556 "state": "online", 00:14:15.556 "raid_level": "raid0", 00:14:15.556 "superblock": true, 00:14:15.556 "num_base_bdevs": 3, 00:14:15.556 "num_base_bdevs_discovered": 3, 00:14:15.556 "num_base_bdevs_operational": 3, 00:14:15.556 "base_bdevs_list": [ 00:14:15.556 { 00:14:15.556 "name": "BaseBdev1", 
00:14:15.556 "uuid": "ec1f8bc1-28d9-5088-8f84-38577d3012d9", 00:14:15.556 "is_configured": true, 00:14:15.556 "data_offset": 2048, 00:14:15.556 "data_size": 63488 00:14:15.556 }, 00:14:15.556 { 00:14:15.556 "name": "BaseBdev2", 00:14:15.556 "uuid": "8cfcf5df-03d3-5808-a268-cbac3778debc", 00:14:15.556 "is_configured": true, 00:14:15.556 "data_offset": 2048, 00:14:15.556 "data_size": 63488 00:14:15.556 }, 00:14:15.556 { 00:14:15.556 "name": "BaseBdev3", 00:14:15.556 "uuid": "5cd728c5-a662-5df1-bf9d-633f45d1c932", 00:14:15.556 "is_configured": true, 00:14:15.556 "data_offset": 2048, 00:14:15.556 "data_size": 63488 00:14:15.556 } 00:14:15.556 ] 00:14:15.556 }' 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.556 09:06:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.123 09:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:16.123 09:06:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:16.123 [2024-11-06 09:06:52.521404] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:17.060 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:17.060 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.060 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.060 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.060 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.061 "name": "raid_bdev1", 00:14:17.061 "uuid": "be62cddb-6ed5-4b80-a040-c1b4af511127", 00:14:17.061 "strip_size_kb": 64, 00:14:17.061 "state": "online", 00:14:17.061 
"raid_level": "raid0", 00:14:17.061 "superblock": true, 00:14:17.061 "num_base_bdevs": 3, 00:14:17.061 "num_base_bdevs_discovered": 3, 00:14:17.061 "num_base_bdevs_operational": 3, 00:14:17.061 "base_bdevs_list": [ 00:14:17.061 { 00:14:17.061 "name": "BaseBdev1", 00:14:17.061 "uuid": "ec1f8bc1-28d9-5088-8f84-38577d3012d9", 00:14:17.061 "is_configured": true, 00:14:17.061 "data_offset": 2048, 00:14:17.061 "data_size": 63488 00:14:17.061 }, 00:14:17.061 { 00:14:17.061 "name": "BaseBdev2", 00:14:17.061 "uuid": "8cfcf5df-03d3-5808-a268-cbac3778debc", 00:14:17.061 "is_configured": true, 00:14:17.061 "data_offset": 2048, 00:14:17.061 "data_size": 63488 00:14:17.061 }, 00:14:17.061 { 00:14:17.061 "name": "BaseBdev3", 00:14:17.061 "uuid": "5cd728c5-a662-5df1-bf9d-633f45d1c932", 00:14:17.061 "is_configured": true, 00:14:17.061 "data_offset": 2048, 00:14:17.061 "data_size": 63488 00:14:17.061 } 00:14:17.061 ] 00:14:17.061 }' 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.061 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.646 [2024-11-06 09:06:53.978529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:17.646 [2024-11-06 09:06:53.978564] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:17.646 { 00:14:17.646 "results": [ 00:14:17.646 { 00:14:17.646 "job": "raid_bdev1", 00:14:17.646 "core_mask": "0x1", 00:14:17.646 "workload": "randrw", 00:14:17.646 "percentage": 50, 00:14:17.646 "status": "finished", 00:14:17.646 "queue_depth": 1, 00:14:17.646 "io_size": 131072, 
00:14:17.646 "runtime": 1.454257, 00:14:17.646 "iops": 10399.812412799114, 00:14:17.646 "mibps": 1299.9765515998893, 00:14:17.646 "io_failed": 1, 00:14:17.646 "io_timeout": 0, 00:14:17.646 "avg_latency_us": 133.707572892562, 00:14:17.646 "min_latency_us": 36.77090909090909, 00:14:17.646 "max_latency_us": 2010.7636363636364 00:14:17.646 } 00:14:17.646 ], 00:14:17.646 "core_count": 1 00:14:17.646 } 00:14:17.646 [2024-11-06 09:06:53.981973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.646 [2024-11-06 09:06:53.982030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.646 [2024-11-06 09:06:53.982082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.646 [2024-11-06 09:06:53.982097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65405 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 65405 ']' 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 65405 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.646 09:06:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65405 00:14:17.646 killing process with pid 65405 00:14:17.646 09:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:17.646 09:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:17.646 09:06:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65405' 00:14:17.646 09:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 65405 00:14:17.646 [2024-11-06 09:06:54.017707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.646 09:06:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 65405 00:14:17.906 [2024-11-06 09:06:54.227785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5bT2z8EIE5 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:14:18.842 00:14:18.842 real 0m4.850s 00:14:18.842 user 0m6.039s 00:14:18.842 sys 0m0.609s 00:14:18.842 ************************************ 00:14:18.842 END TEST raid_write_error_test 00:14:18.842 ************************************ 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:18.842 09:06:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.101 09:06:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:19.101 09:06:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:14:19.101 09:06:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:19.101 09:06:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:19.101 09:06:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.101 ************************************ 00:14:19.101 START TEST raid_state_function_test 00:14:19.101 ************************************ 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:19.101 09:06:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:19.101 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65555 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:19.102 Process raid pid: 65555 00:14:19.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65555' 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65555 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 65555 ']' 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.102 09:06:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.102 [2024-11-06 09:06:55.549049] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:14:19.102 [2024-11-06 09:06:55.549243] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.361 [2024-11-06 09:06:55.740935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.361 [2024-11-06 09:06:55.873779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.619 [2024-11-06 09:06:56.094467] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.619 [2024-11-06 09:06:56.094518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.186 [2024-11-06 09:06:56.600904] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.186 [2024-11-06 09:06:56.601114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.186 [2024-11-06 09:06:56.601243] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.186 [2024-11-06 09:06:56.601306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.186 [2024-11-06 09:06:56.601323] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:14:20.186 [2024-11-06 09:06:56.601340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.186 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.186 09:06:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.186 "name": "Existed_Raid", 00:14:20.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.186 "strip_size_kb": 64, 00:14:20.186 "state": "configuring", 00:14:20.186 "raid_level": "concat", 00:14:20.186 "superblock": false, 00:14:20.186 "num_base_bdevs": 3, 00:14:20.186 "num_base_bdevs_discovered": 0, 00:14:20.186 "num_base_bdevs_operational": 3, 00:14:20.186 "base_bdevs_list": [ 00:14:20.186 { 00:14:20.186 "name": "BaseBdev1", 00:14:20.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.186 "is_configured": false, 00:14:20.186 "data_offset": 0, 00:14:20.186 "data_size": 0 00:14:20.186 }, 00:14:20.186 { 00:14:20.186 "name": "BaseBdev2", 00:14:20.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.186 "is_configured": false, 00:14:20.187 "data_offset": 0, 00:14:20.187 "data_size": 0 00:14:20.187 }, 00:14:20.187 { 00:14:20.187 "name": "BaseBdev3", 00:14:20.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.187 "is_configured": false, 00:14:20.187 "data_offset": 0, 00:14:20.187 "data_size": 0 00:14:20.187 } 00:14:20.187 ] 00:14:20.187 }' 00:14:20.187 09:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.187 09:06:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.758 [2024-11-06 09:06:57.192993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.758 [2024-11-06 09:06:57.193037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
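The trace above shows `verify_raid_bdev_state` fetching `bdev_raid_get_bdevs all`, selecting the `Existed_Raid` entry with `jq`, and checking it is still `configuring` with zero base bdevs discovered. The following is a minimal Python sketch of that same check, using the JSON captured in the log; the `verify_raid_bdev_state` helper here is illustrative only and is not the actual shell function from `bdev_raid.sh`.

```python
import json

# raid_bdev_info as dumped in the trace above: the raid bdev exists but none
# of BaseBdev1/2/3 have been created yet, so the array stays in "configuring".
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "superblock": false,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev2", "is_configured": false, "data_offset": 0, "data_size": 0},
    {"name": "BaseBdev3", "is_configured": false, "data_offset": 0, "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Sketch of the assertions the shell helper makes on the RPC output."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must equal the number of configured base bdevs.
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert configured == info["num_base_bdevs_discovered"]
    return True

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "concat", 64, 3))
```

As base bdevs are created (`bdev_malloc_create 32 512 -b BaseBdevN`), the same check is repeated with `num_base_bdevs_discovered` rising to 1, 2, and finally 3, at which point the expected state flips to `online`, matching the progression in the rest of this trace.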
00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.758 [2024-11-06 09:06:57.200974] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.758 [2024-11-06 09:06:57.201159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.758 [2024-11-06 09:06:57.201289] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.758 [2024-11-06 09:06:57.201350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.758 [2024-11-06 09:06:57.201505] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.758 [2024-11-06 09:06:57.201577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.758 [2024-11-06 09:06:57.245495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.758 BaseBdev1 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.758 [ 00:14:20.758 { 00:14:20.758 "name": "BaseBdev1", 00:14:20.758 "aliases": [ 00:14:20.758 "9bda0b26-c86b-4209-8b22-d21d29bda230" 00:14:20.758 ], 00:14:20.758 "product_name": "Malloc disk", 00:14:20.758 "block_size": 512, 00:14:20.758 "num_blocks": 65536, 00:14:20.758 "uuid": "9bda0b26-c86b-4209-8b22-d21d29bda230", 00:14:20.758 "assigned_rate_limits": { 00:14:20.758 "rw_ios_per_sec": 0, 00:14:20.758 "rw_mbytes_per_sec": 0, 00:14:20.758 "r_mbytes_per_sec": 0, 00:14:20.758 "w_mbytes_per_sec": 0 00:14:20.758 }, 
00:14:20.758 "claimed": true, 00:14:20.758 "claim_type": "exclusive_write", 00:14:20.758 "zoned": false, 00:14:20.758 "supported_io_types": { 00:14:20.758 "read": true, 00:14:20.758 "write": true, 00:14:20.758 "unmap": true, 00:14:20.758 "flush": true, 00:14:20.758 "reset": true, 00:14:20.758 "nvme_admin": false, 00:14:20.758 "nvme_io": false, 00:14:20.758 "nvme_io_md": false, 00:14:20.758 "write_zeroes": true, 00:14:20.758 "zcopy": true, 00:14:20.758 "get_zone_info": false, 00:14:20.758 "zone_management": false, 00:14:20.758 "zone_append": false, 00:14:20.758 "compare": false, 00:14:20.758 "compare_and_write": false, 00:14:20.758 "abort": true, 00:14:20.758 "seek_hole": false, 00:14:20.758 "seek_data": false, 00:14:20.758 "copy": true, 00:14:20.758 "nvme_iov_md": false 00:14:20.758 }, 00:14:20.758 "memory_domains": [ 00:14:20.758 { 00:14:20.758 "dma_device_id": "system", 00:14:20.758 "dma_device_type": 1 00:14:20.758 }, 00:14:20.758 { 00:14:20.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.758 "dma_device_type": 2 00:14:20.758 } 00:14:20.758 ], 00:14:20.758 "driver_specific": {} 00:14:20.758 } 00:14:20.758 ] 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.758 09:06:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.758 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.016 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.016 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.016 "name": "Existed_Raid", 00:14:21.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.016 "strip_size_kb": 64, 00:14:21.016 "state": "configuring", 00:14:21.016 "raid_level": "concat", 00:14:21.016 "superblock": false, 00:14:21.016 "num_base_bdevs": 3, 00:14:21.016 "num_base_bdevs_discovered": 1, 00:14:21.016 "num_base_bdevs_operational": 3, 00:14:21.016 "base_bdevs_list": [ 00:14:21.016 { 00:14:21.016 "name": "BaseBdev1", 00:14:21.016 "uuid": "9bda0b26-c86b-4209-8b22-d21d29bda230", 00:14:21.016 "is_configured": true, 00:14:21.016 "data_offset": 0, 00:14:21.016 "data_size": 65536 00:14:21.016 }, 00:14:21.016 { 00:14:21.016 "name": "BaseBdev2", 00:14:21.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.016 "is_configured": false, 00:14:21.016 
"data_offset": 0, 00:14:21.016 "data_size": 0 00:14:21.016 }, 00:14:21.016 { 00:14:21.016 "name": "BaseBdev3", 00:14:21.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.016 "is_configured": false, 00:14:21.016 "data_offset": 0, 00:14:21.016 "data_size": 0 00:14:21.016 } 00:14:21.016 ] 00:14:21.016 }' 00:14:21.016 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.016 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.274 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:21.274 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.274 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.532 [2024-11-06 09:06:57.809692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:21.532 [2024-11-06 09:06:57.809759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.532 [2024-11-06 09:06:57.817750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.532 [2024-11-06 09:06:57.820279] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.532 [2024-11-06 09:06:57.820333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:14:21.532 [2024-11-06 09:06:57.820350] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.532 [2024-11-06 09:06:57.820366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.532 "name": "Existed_Raid", 00:14:21.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.532 "strip_size_kb": 64, 00:14:21.532 "state": "configuring", 00:14:21.532 "raid_level": "concat", 00:14:21.532 "superblock": false, 00:14:21.532 "num_base_bdevs": 3, 00:14:21.532 "num_base_bdevs_discovered": 1, 00:14:21.532 "num_base_bdevs_operational": 3, 00:14:21.532 "base_bdevs_list": [ 00:14:21.532 { 00:14:21.532 "name": "BaseBdev1", 00:14:21.532 "uuid": "9bda0b26-c86b-4209-8b22-d21d29bda230", 00:14:21.532 "is_configured": true, 00:14:21.532 "data_offset": 0, 00:14:21.532 "data_size": 65536 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "name": "BaseBdev2", 00:14:21.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.532 "is_configured": false, 00:14:21.532 "data_offset": 0, 00:14:21.532 "data_size": 0 00:14:21.532 }, 00:14:21.532 { 00:14:21.532 "name": "BaseBdev3", 00:14:21.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.532 "is_configured": false, 00:14:21.532 "data_offset": 0, 00:14:21.532 "data_size": 0 00:14:21.532 } 00:14:21.532 ] 00:14:21.532 }' 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.532 09:06:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.099 [2024-11-06 09:06:58.377273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.099 BaseBdev2 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.099 [ 00:14:22.099 { 00:14:22.099 "name": "BaseBdev2", 00:14:22.099 "aliases": [ 00:14:22.099 "a5d60207-8cf3-476a-acea-d195b454b679" 00:14:22.099 ], 
00:14:22.099 "product_name": "Malloc disk", 00:14:22.099 "block_size": 512, 00:14:22.099 "num_blocks": 65536, 00:14:22.099 "uuid": "a5d60207-8cf3-476a-acea-d195b454b679", 00:14:22.099 "assigned_rate_limits": { 00:14:22.099 "rw_ios_per_sec": 0, 00:14:22.099 "rw_mbytes_per_sec": 0, 00:14:22.099 "r_mbytes_per_sec": 0, 00:14:22.099 "w_mbytes_per_sec": 0 00:14:22.099 }, 00:14:22.099 "claimed": true, 00:14:22.099 "claim_type": "exclusive_write", 00:14:22.099 "zoned": false, 00:14:22.099 "supported_io_types": { 00:14:22.099 "read": true, 00:14:22.099 "write": true, 00:14:22.099 "unmap": true, 00:14:22.099 "flush": true, 00:14:22.099 "reset": true, 00:14:22.099 "nvme_admin": false, 00:14:22.099 "nvme_io": false, 00:14:22.099 "nvme_io_md": false, 00:14:22.099 "write_zeroes": true, 00:14:22.099 "zcopy": true, 00:14:22.099 "get_zone_info": false, 00:14:22.099 "zone_management": false, 00:14:22.099 "zone_append": false, 00:14:22.099 "compare": false, 00:14:22.099 "compare_and_write": false, 00:14:22.099 "abort": true, 00:14:22.099 "seek_hole": false, 00:14:22.099 "seek_data": false, 00:14:22.099 "copy": true, 00:14:22.099 "nvme_iov_md": false 00:14:22.099 }, 00:14:22.099 "memory_domains": [ 00:14:22.099 { 00:14:22.099 "dma_device_id": "system", 00:14:22.099 "dma_device_type": 1 00:14:22.099 }, 00:14:22.099 { 00:14:22.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.099 "dma_device_type": 2 00:14:22.099 } 00:14:22.099 ], 00:14:22.099 "driver_specific": {} 00:14:22.099 } 00:14:22.099 ] 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.099 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.100 "name": "Existed_Raid", 00:14:22.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.100 "strip_size_kb": 64, 00:14:22.100 "state": "configuring", 00:14:22.100 "raid_level": "concat", 00:14:22.100 
"superblock": false, 00:14:22.100 "num_base_bdevs": 3, 00:14:22.100 "num_base_bdevs_discovered": 2, 00:14:22.100 "num_base_bdevs_operational": 3, 00:14:22.100 "base_bdevs_list": [ 00:14:22.100 { 00:14:22.100 "name": "BaseBdev1", 00:14:22.100 "uuid": "9bda0b26-c86b-4209-8b22-d21d29bda230", 00:14:22.100 "is_configured": true, 00:14:22.100 "data_offset": 0, 00:14:22.100 "data_size": 65536 00:14:22.100 }, 00:14:22.100 { 00:14:22.100 "name": "BaseBdev2", 00:14:22.100 "uuid": "a5d60207-8cf3-476a-acea-d195b454b679", 00:14:22.100 "is_configured": true, 00:14:22.100 "data_offset": 0, 00:14:22.100 "data_size": 65536 00:14:22.100 }, 00:14:22.100 { 00:14:22.100 "name": "BaseBdev3", 00:14:22.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.100 "is_configured": false, 00:14:22.100 "data_offset": 0, 00:14:22.100 "data_size": 0 00:14:22.100 } 00:14:22.100 ] 00:14:22.100 }' 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.100 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.666 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.666 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.666 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.666 [2024-11-06 09:06:58.988222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.666 [2024-11-06 09:06:58.988291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:22.666 [2024-11-06 09:06:58.988315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:22.666 [2024-11-06 09:06:58.988729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:22.667 [2024-11-06 09:06:58.989022] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:22.667 [2024-11-06 09:06:58.989054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:22.667 [2024-11-06 09:06:58.989448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.667 BaseBdev3 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.667 09:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.667 [ 00:14:22.667 { 00:14:22.667 
"name": "BaseBdev3", 00:14:22.667 "aliases": [ 00:14:22.667 "6b9994cf-2ab5-4095-a356-330475e968ad" 00:14:22.667 ], 00:14:22.667 "product_name": "Malloc disk", 00:14:22.667 "block_size": 512, 00:14:22.667 "num_blocks": 65536, 00:14:22.667 "uuid": "6b9994cf-2ab5-4095-a356-330475e968ad", 00:14:22.667 "assigned_rate_limits": { 00:14:22.667 "rw_ios_per_sec": 0, 00:14:22.667 "rw_mbytes_per_sec": 0, 00:14:22.667 "r_mbytes_per_sec": 0, 00:14:22.667 "w_mbytes_per_sec": 0 00:14:22.667 }, 00:14:22.667 "claimed": true, 00:14:22.667 "claim_type": "exclusive_write", 00:14:22.667 "zoned": false, 00:14:22.667 "supported_io_types": { 00:14:22.667 "read": true, 00:14:22.667 "write": true, 00:14:22.667 "unmap": true, 00:14:22.667 "flush": true, 00:14:22.667 "reset": true, 00:14:22.667 "nvme_admin": false, 00:14:22.667 "nvme_io": false, 00:14:22.667 "nvme_io_md": false, 00:14:22.667 "write_zeroes": true, 00:14:22.667 "zcopy": true, 00:14:22.667 "get_zone_info": false, 00:14:22.667 "zone_management": false, 00:14:22.667 "zone_append": false, 00:14:22.667 "compare": false, 00:14:22.667 "compare_and_write": false, 00:14:22.667 "abort": true, 00:14:22.667 "seek_hole": false, 00:14:22.667 "seek_data": false, 00:14:22.667 "copy": true, 00:14:22.667 "nvme_iov_md": false 00:14:22.667 }, 00:14:22.667 "memory_domains": [ 00:14:22.667 { 00:14:22.667 "dma_device_id": "system", 00:14:22.667 "dma_device_type": 1 00:14:22.667 }, 00:14:22.667 { 00:14:22.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.667 "dma_device_type": 2 00:14:22.667 } 00:14:22.667 ], 00:14:22.667 "driver_specific": {} 00:14:22.667 } 00:14:22.667 ] 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.667 "name": "Existed_Raid", 00:14:22.667 "uuid": "74dd9022-e337-4d50-b0ab-0a5f85842aa9", 00:14:22.667 
"strip_size_kb": 64, 00:14:22.667 "state": "online", 00:14:22.667 "raid_level": "concat", 00:14:22.667 "superblock": false, 00:14:22.667 "num_base_bdevs": 3, 00:14:22.667 "num_base_bdevs_discovered": 3, 00:14:22.667 "num_base_bdevs_operational": 3, 00:14:22.667 "base_bdevs_list": [ 00:14:22.667 { 00:14:22.667 "name": "BaseBdev1", 00:14:22.667 "uuid": "9bda0b26-c86b-4209-8b22-d21d29bda230", 00:14:22.667 "is_configured": true, 00:14:22.667 "data_offset": 0, 00:14:22.667 "data_size": 65536 00:14:22.667 }, 00:14:22.667 { 00:14:22.667 "name": "BaseBdev2", 00:14:22.667 "uuid": "a5d60207-8cf3-476a-acea-d195b454b679", 00:14:22.667 "is_configured": true, 00:14:22.667 "data_offset": 0, 00:14:22.667 "data_size": 65536 00:14:22.667 }, 00:14:22.667 { 00:14:22.667 "name": "BaseBdev3", 00:14:22.667 "uuid": "6b9994cf-2ab5-4095-a356-330475e968ad", 00:14:22.667 "is_configured": true, 00:14:22.667 "data_offset": 0, 00:14:22.667 "data_size": 65536 00:14:22.667 } 00:14:22.667 ] 00:14:22.667 }' 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.667 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.234 [2024-11-06 09:06:59.576942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.234 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.234 "name": "Existed_Raid", 00:14:23.234 "aliases": [ 00:14:23.234 "74dd9022-e337-4d50-b0ab-0a5f85842aa9" 00:14:23.234 ], 00:14:23.234 "product_name": "Raid Volume", 00:14:23.234 "block_size": 512, 00:14:23.234 "num_blocks": 196608, 00:14:23.234 "uuid": "74dd9022-e337-4d50-b0ab-0a5f85842aa9", 00:14:23.234 "assigned_rate_limits": { 00:14:23.234 "rw_ios_per_sec": 0, 00:14:23.234 "rw_mbytes_per_sec": 0, 00:14:23.234 "r_mbytes_per_sec": 0, 00:14:23.234 "w_mbytes_per_sec": 0 00:14:23.234 }, 00:14:23.234 "claimed": false, 00:14:23.234 "zoned": false, 00:14:23.234 "supported_io_types": { 00:14:23.234 "read": true, 00:14:23.234 "write": true, 00:14:23.234 "unmap": true, 00:14:23.234 "flush": true, 00:14:23.234 "reset": true, 00:14:23.234 "nvme_admin": false, 00:14:23.234 "nvme_io": false, 00:14:23.234 "nvme_io_md": false, 00:14:23.234 "write_zeroes": true, 00:14:23.234 "zcopy": false, 00:14:23.234 "get_zone_info": false, 00:14:23.234 "zone_management": false, 00:14:23.234 "zone_append": false, 00:14:23.234 "compare": false, 00:14:23.234 "compare_and_write": false, 00:14:23.234 "abort": false, 00:14:23.234 "seek_hole": false, 00:14:23.234 "seek_data": false, 00:14:23.235 "copy": false, 00:14:23.235 "nvme_iov_md": false 00:14:23.235 }, 00:14:23.235 "memory_domains": [ 00:14:23.235 { 00:14:23.235 "dma_device_id": "system", 
00:14:23.235 "dma_device_type": 1 00:14:23.235 }, 00:14:23.235 { 00:14:23.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.235 "dma_device_type": 2 00:14:23.235 }, 00:14:23.235 { 00:14:23.235 "dma_device_id": "system", 00:14:23.235 "dma_device_type": 1 00:14:23.235 }, 00:14:23.235 { 00:14:23.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.235 "dma_device_type": 2 00:14:23.235 }, 00:14:23.235 { 00:14:23.235 "dma_device_id": "system", 00:14:23.235 "dma_device_type": 1 00:14:23.235 }, 00:14:23.235 { 00:14:23.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.235 "dma_device_type": 2 00:14:23.235 } 00:14:23.235 ], 00:14:23.235 "driver_specific": { 00:14:23.235 "raid": { 00:14:23.235 "uuid": "74dd9022-e337-4d50-b0ab-0a5f85842aa9", 00:14:23.235 "strip_size_kb": 64, 00:14:23.235 "state": "online", 00:14:23.235 "raid_level": "concat", 00:14:23.235 "superblock": false, 00:14:23.235 "num_base_bdevs": 3, 00:14:23.235 "num_base_bdevs_discovered": 3, 00:14:23.235 "num_base_bdevs_operational": 3, 00:14:23.235 "base_bdevs_list": [ 00:14:23.235 { 00:14:23.235 "name": "BaseBdev1", 00:14:23.235 "uuid": "9bda0b26-c86b-4209-8b22-d21d29bda230", 00:14:23.235 "is_configured": true, 00:14:23.235 "data_offset": 0, 00:14:23.235 "data_size": 65536 00:14:23.235 }, 00:14:23.235 { 00:14:23.235 "name": "BaseBdev2", 00:14:23.235 "uuid": "a5d60207-8cf3-476a-acea-d195b454b679", 00:14:23.235 "is_configured": true, 00:14:23.235 "data_offset": 0, 00:14:23.235 "data_size": 65536 00:14:23.235 }, 00:14:23.235 { 00:14:23.235 "name": "BaseBdev3", 00:14:23.235 "uuid": "6b9994cf-2ab5-4095-a356-330475e968ad", 00:14:23.235 "is_configured": true, 00:14:23.235 "data_offset": 0, 00:14:23.235 "data_size": 65536 00:14:23.235 } 00:14:23.235 ] 00:14:23.235 } 00:14:23.235 } 00:14:23.235 }' 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.235 09:06:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:23.235 BaseBdev2 00:14:23.235 BaseBdev3' 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.235 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.493 09:06:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.493 [2024-11-06 09:06:59.864672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:23.493 [2024-11-06 09:06:59.864709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.493 [2024-11-06 09:06:59.864778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.493 09:06:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.493 09:06:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.493 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.493 "name": "Existed_Raid", 00:14:23.493 "uuid": "74dd9022-e337-4d50-b0ab-0a5f85842aa9", 00:14:23.493 "strip_size_kb": 64, 00:14:23.493 "state": "offline", 00:14:23.493 "raid_level": "concat", 00:14:23.493 "superblock": false, 00:14:23.493 "num_base_bdevs": 3, 00:14:23.493 "num_base_bdevs_discovered": 2, 00:14:23.493 "num_base_bdevs_operational": 2, 00:14:23.493 "base_bdevs_list": [ 00:14:23.493 { 00:14:23.493 "name": null, 00:14:23.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.493 "is_configured": false, 00:14:23.493 "data_offset": 0, 00:14:23.493 "data_size": 65536 00:14:23.493 }, 00:14:23.493 { 00:14:23.493 "name": "BaseBdev2", 00:14:23.493 "uuid": "a5d60207-8cf3-476a-acea-d195b454b679", 00:14:23.493 "is_configured": true, 00:14:23.493 "data_offset": 0, 00:14:23.493 "data_size": 65536 00:14:23.493 }, 00:14:23.493 { 00:14:23.493 "name": "BaseBdev3", 00:14:23.493 "uuid": "6b9994cf-2ab5-4095-a356-330475e968ad", 00:14:23.493 "is_configured": true, 00:14:23.493 "data_offset": 0, 00:14:23.493 "data_size": 65536 00:14:23.493 } 00:14:23.493 ] 00:14:23.493 }' 00:14:23.493 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.493 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.060 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.060 [2024-11-06 09:07:00.542945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.319 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.319 [2024-11-06 09:07:00.696081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.320 [2024-11-06 09:07:00.696167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:24.320 09:07:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.320 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.578 BaseBdev2 00:14:24.578 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.578 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:24.578 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:24.578 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:24.578 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:24.578 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:24.578 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs 
-b BaseBdev2 -t 2000 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 [ 00:14:24.579 { 00:14:24.579 "name": "BaseBdev2", 00:14:24.579 "aliases": [ 00:14:24.579 "1f99f3e3-e2db-457b-80ff-4e128e17f4bb" 00:14:24.579 ], 00:14:24.579 "product_name": "Malloc disk", 00:14:24.579 "block_size": 512, 00:14:24.579 "num_blocks": 65536, 00:14:24.579 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:24.579 "assigned_rate_limits": { 00:14:24.579 "rw_ios_per_sec": 0, 00:14:24.579 "rw_mbytes_per_sec": 0, 00:14:24.579 "r_mbytes_per_sec": 0, 00:14:24.579 "w_mbytes_per_sec": 0 00:14:24.579 }, 00:14:24.579 "claimed": false, 00:14:24.579 "zoned": false, 00:14:24.579 "supported_io_types": { 00:14:24.579 "read": true, 00:14:24.579 "write": true, 00:14:24.579 "unmap": true, 00:14:24.579 "flush": true, 00:14:24.579 "reset": true, 00:14:24.579 "nvme_admin": false, 00:14:24.579 "nvme_io": false, 00:14:24.579 "nvme_io_md": false, 00:14:24.579 "write_zeroes": true, 00:14:24.579 "zcopy": true, 00:14:24.579 "get_zone_info": false, 00:14:24.579 "zone_management": false, 00:14:24.579 "zone_append": false, 00:14:24.579 "compare": false, 00:14:24.579 "compare_and_write": false, 00:14:24.579 "abort": true, 00:14:24.579 "seek_hole": false, 00:14:24.579 "seek_data": false, 00:14:24.579 "copy": true, 00:14:24.579 "nvme_iov_md": false 00:14:24.579 }, 00:14:24.579 "memory_domains": [ 00:14:24.579 { 00:14:24.579 "dma_device_id": "system", 00:14:24.579 "dma_device_type": 1 00:14:24.579 }, 00:14:24.579 { 00:14:24.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.579 "dma_device_type": 2 00:14:24.579 } 00:14:24.579 ], 00:14:24.579 "driver_specific": {} 00:14:24.579 } 00:14:24.579 ] 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 09:07:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 BaseBdev3 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 [ 00:14:24.579 { 00:14:24.579 "name": "BaseBdev3", 00:14:24.579 "aliases": [ 00:14:24.579 "2f7f3e9e-f828-445d-aa43-e93562710a26" 00:14:24.579 ], 00:14:24.579 "product_name": "Malloc disk", 00:14:24.579 "block_size": 512, 00:14:24.579 "num_blocks": 65536, 00:14:24.579 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:24.579 "assigned_rate_limits": { 00:14:24.579 "rw_ios_per_sec": 0, 00:14:24.579 "rw_mbytes_per_sec": 0, 00:14:24.579 "r_mbytes_per_sec": 0, 00:14:24.579 "w_mbytes_per_sec": 0 00:14:24.579 }, 00:14:24.579 "claimed": false, 00:14:24.579 "zoned": false, 00:14:24.579 "supported_io_types": { 00:14:24.579 "read": true, 00:14:24.579 "write": true, 00:14:24.579 "unmap": true, 00:14:24.579 "flush": true, 00:14:24.579 "reset": true, 00:14:24.579 "nvme_admin": false, 00:14:24.579 "nvme_io": false, 00:14:24.579 "nvme_io_md": false, 00:14:24.579 "write_zeroes": true, 00:14:24.579 "zcopy": true, 00:14:24.579 "get_zone_info": false, 00:14:24.579 "zone_management": false, 00:14:24.579 "zone_append": false, 00:14:24.579 "compare": false, 00:14:24.579 "compare_and_write": false, 00:14:24.579 "abort": true, 00:14:24.579 "seek_hole": false, 00:14:24.579 "seek_data": false, 00:14:24.579 "copy": true, 00:14:24.579 "nvme_iov_md": false 00:14:24.579 }, 00:14:24.579 "memory_domains": [ 00:14:24.579 { 00:14:24.579 "dma_device_id": "system", 00:14:24.579 "dma_device_type": 1 00:14:24.579 }, 00:14:24.579 { 00:14:24.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.579 "dma_device_type": 2 00:14:24.579 } 00:14:24.579 ], 00:14:24.579 "driver_specific": {} 00:14:24.579 } 00:14:24.579 ] 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 09:07:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 [2024-11-06 09:07:00.993057] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:24.579 [2024-11-06 09:07:00.993112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:24.579 [2024-11-06 09:07:00.993159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.579 [2024-11-06 09:07:00.995649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=3 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.579 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.579 09:07:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.579 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.579 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.579 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.579 "name": "Existed_Raid", 00:14:24.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.579 "strip_size_kb": 64, 00:14:24.579 "state": "configuring", 00:14:24.580 "raid_level": "concat", 00:14:24.580 "superblock": false, 00:14:24.580 "num_base_bdevs": 3, 00:14:24.580 "num_base_bdevs_discovered": 2, 00:14:24.580 "num_base_bdevs_operational": 3, 00:14:24.580 "base_bdevs_list": [ 00:14:24.580 { 00:14:24.580 "name": "BaseBdev1", 00:14:24.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.580 "is_configured": false, 00:14:24.580 "data_offset": 0, 00:14:24.580 "data_size": 0 00:14:24.580 }, 00:14:24.580 { 00:14:24.580 "name": "BaseBdev2", 00:14:24.580 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:24.580 "is_configured": true, 00:14:24.580 "data_offset": 0, 00:14:24.580 "data_size": 65536 00:14:24.580 }, 
00:14:24.580 { 00:14:24.580 "name": "BaseBdev3", 00:14:24.580 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:24.580 "is_configured": true, 00:14:24.580 "data_offset": 0, 00:14:24.580 "data_size": 65536 00:14:24.580 } 00:14:24.580 ] 00:14:24.580 }' 00:14:24.580 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.580 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.147 [2024-11-06 09:07:01.525288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.147 09:07:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.147 09:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.147 "name": "Existed_Raid", 00:14:25.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.147 "strip_size_kb": 64, 00:14:25.148 "state": "configuring", 00:14:25.148 "raid_level": "concat", 00:14:25.148 "superblock": false, 00:14:25.148 "num_base_bdevs": 3, 00:14:25.148 "num_base_bdevs_discovered": 1, 00:14:25.148 "num_base_bdevs_operational": 3, 00:14:25.148 "base_bdevs_list": [ 00:14:25.148 { 00:14:25.148 "name": "BaseBdev1", 00:14:25.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.148 "is_configured": false, 00:14:25.148 "data_offset": 0, 00:14:25.148 "data_size": 0 00:14:25.148 }, 00:14:25.148 { 00:14:25.148 "name": null, 00:14:25.148 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:25.148 "is_configured": false, 00:14:25.148 "data_offset": 0, 00:14:25.148 "data_size": 65536 00:14:25.148 }, 00:14:25.148 { 00:14:25.148 "name": "BaseBdev3", 00:14:25.148 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:25.148 "is_configured": true, 00:14:25.148 "data_offset": 0, 00:14:25.148 "data_size": 65536 00:14:25.148 } 00:14:25.148 ] 00:14:25.148 }' 00:14:25.148 09:07:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.148 09:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.714 [2024-11-06 09:07:02.148103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.714 BaseBdev1 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.714 [ 00:14:25.714 { 00:14:25.714 "name": "BaseBdev1", 00:14:25.714 "aliases": [ 00:14:25.714 "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14" 00:14:25.714 ], 00:14:25.714 "product_name": "Malloc disk", 00:14:25.714 "block_size": 512, 00:14:25.714 "num_blocks": 65536, 00:14:25.714 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:25.714 "assigned_rate_limits": { 00:14:25.714 "rw_ios_per_sec": 0, 00:14:25.714 "rw_mbytes_per_sec": 0, 00:14:25.714 "r_mbytes_per_sec": 0, 00:14:25.714 "w_mbytes_per_sec": 0 00:14:25.714 }, 00:14:25.714 "claimed": true, 00:14:25.714 "claim_type": "exclusive_write", 00:14:25.714 "zoned": false, 00:14:25.714 "supported_io_types": { 00:14:25.714 "read": true, 00:14:25.714 "write": true, 00:14:25.714 "unmap": true, 00:14:25.714 "flush": true, 00:14:25.714 "reset": true, 00:14:25.714 "nvme_admin": false, 00:14:25.714 "nvme_io": false, 00:14:25.714 "nvme_io_md": false, 00:14:25.714 "write_zeroes": true, 00:14:25.714 "zcopy": true, 00:14:25.714 "get_zone_info": false, 00:14:25.714 "zone_management": false, 
00:14:25.714 "zone_append": false, 00:14:25.714 "compare": false, 00:14:25.714 "compare_and_write": false, 00:14:25.714 "abort": true, 00:14:25.714 "seek_hole": false, 00:14:25.714 "seek_data": false, 00:14:25.714 "copy": true, 00:14:25.714 "nvme_iov_md": false 00:14:25.714 }, 00:14:25.714 "memory_domains": [ 00:14:25.714 { 00:14:25.714 "dma_device_id": "system", 00:14:25.714 "dma_device_type": 1 00:14:25.714 }, 00:14:25.714 { 00:14:25.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.714 "dma_device_type": 2 00:14:25.714 } 00:14:25.714 ], 00:14:25.714 "driver_specific": {} 00:14:25.714 } 00:14:25.714 ] 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.714 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.715 "name": "Existed_Raid", 00:14:25.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.715 "strip_size_kb": 64, 00:14:25.715 "state": "configuring", 00:14:25.715 "raid_level": "concat", 00:14:25.715 "superblock": false, 00:14:25.715 "num_base_bdevs": 3, 00:14:25.715 "num_base_bdevs_discovered": 2, 00:14:25.715 "num_base_bdevs_operational": 3, 00:14:25.715 "base_bdevs_list": [ 00:14:25.715 { 00:14:25.715 "name": "BaseBdev1", 00:14:25.715 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:25.715 "is_configured": true, 00:14:25.715 "data_offset": 0, 00:14:25.715 "data_size": 65536 00:14:25.715 }, 00:14:25.715 { 00:14:25.715 "name": null, 00:14:25.715 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:25.715 "is_configured": false, 00:14:25.715 "data_offset": 0, 00:14:25.715 "data_size": 65536 00:14:25.715 }, 00:14:25.715 { 00:14:25.715 "name": "BaseBdev3", 00:14:25.715 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:25.715 "is_configured": true, 00:14:25.715 "data_offset": 0, 00:14:25.715 "data_size": 65536 00:14:25.715 } 00:14:25.715 ] 00:14:25.715 }' 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.715 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 09:07:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 [2024-11-06 09:07:02.772363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.283 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.540 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.540 "name": "Existed_Raid", 00:14:26.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.541 "strip_size_kb": 64, 00:14:26.541 "state": "configuring", 00:14:26.541 "raid_level": "concat", 00:14:26.541 "superblock": false, 00:14:26.541 "num_base_bdevs": 3, 00:14:26.541 "num_base_bdevs_discovered": 1, 00:14:26.541 "num_base_bdevs_operational": 3, 00:14:26.541 "base_bdevs_list": [ 00:14:26.541 { 00:14:26.541 "name": "BaseBdev1", 00:14:26.541 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:26.541 "is_configured": true, 00:14:26.541 "data_offset": 0, 00:14:26.541 "data_size": 65536 00:14:26.541 }, 00:14:26.541 { 00:14:26.541 "name": null, 00:14:26.541 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:26.541 "is_configured": false, 00:14:26.541 "data_offset": 0, 00:14:26.541 "data_size": 65536 00:14:26.541 }, 00:14:26.541 { 00:14:26.541 "name": null, 00:14:26.541 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 
00:14:26.541 "is_configured": false, 00:14:26.541 "data_offset": 0, 00:14:26.541 "data_size": 65536 00:14:26.541 } 00:14:26.541 ] 00:14:26.541 }' 00:14:26.541 09:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.541 09:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.799 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.799 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:26.799 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.799 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 [2024-11-06 09:07:03.388629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.058 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.058 "name": "Existed_Raid", 00:14:27.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.058 "strip_size_kb": 64, 00:14:27.058 "state": "configuring", 00:14:27.058 "raid_level": "concat", 00:14:27.058 "superblock": false, 00:14:27.058 "num_base_bdevs": 3, 00:14:27.058 "num_base_bdevs_discovered": 2, 00:14:27.058 "num_base_bdevs_operational": 3, 00:14:27.058 "base_bdevs_list": [ 00:14:27.058 { 00:14:27.058 "name": "BaseBdev1", 00:14:27.058 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:27.058 
"is_configured": true, 00:14:27.058 "data_offset": 0, 00:14:27.058 "data_size": 65536 00:14:27.058 }, 00:14:27.058 { 00:14:27.058 "name": null, 00:14:27.058 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:27.058 "is_configured": false, 00:14:27.058 "data_offset": 0, 00:14:27.059 "data_size": 65536 00:14:27.059 }, 00:14:27.059 { 00:14:27.059 "name": "BaseBdev3", 00:14:27.059 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:27.059 "is_configured": true, 00:14:27.059 "data_offset": 0, 00:14:27.059 "data_size": 65536 00:14:27.059 } 00:14:27.059 ] 00:14:27.059 }' 00:14:27.059 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.059 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.625 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.625 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.626 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:27.626 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.626 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.626 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:27.626 09:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:27.626 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.626 09:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.626 [2024-11-06 09:07:03.972819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.626 "name": "Existed_Raid", 00:14:27.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.626 "strip_size_kb": 64, 00:14:27.626 "state": 
"configuring", 00:14:27.626 "raid_level": "concat", 00:14:27.626 "superblock": false, 00:14:27.626 "num_base_bdevs": 3, 00:14:27.626 "num_base_bdevs_discovered": 1, 00:14:27.626 "num_base_bdevs_operational": 3, 00:14:27.626 "base_bdevs_list": [ 00:14:27.626 { 00:14:27.626 "name": null, 00:14:27.626 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:27.626 "is_configured": false, 00:14:27.626 "data_offset": 0, 00:14:27.626 "data_size": 65536 00:14:27.626 }, 00:14:27.626 { 00:14:27.626 "name": null, 00:14:27.626 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:27.626 "is_configured": false, 00:14:27.626 "data_offset": 0, 00:14:27.626 "data_size": 65536 00:14:27.626 }, 00:14:27.626 { 00:14:27.626 "name": "BaseBdev3", 00:14:27.626 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:27.626 "is_configured": true, 00:14:27.626 "data_offset": 0, 00:14:27.626 "data_size": 65536 00:14:27.626 } 00:14:27.626 ] 00:14:27.626 }' 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.626 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:28.193 09:07:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.193 [2024-11-06 09:07:04.630116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.193 "name": "Existed_Raid", 00:14:28.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.193 "strip_size_kb": 64, 00:14:28.193 "state": "configuring", 00:14:28.193 "raid_level": "concat", 00:14:28.193 "superblock": false, 00:14:28.193 "num_base_bdevs": 3, 00:14:28.193 "num_base_bdevs_discovered": 2, 00:14:28.193 "num_base_bdevs_operational": 3, 00:14:28.193 "base_bdevs_list": [ 00:14:28.193 { 00:14:28.193 "name": null, 00:14:28.193 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:28.193 "is_configured": false, 00:14:28.193 "data_offset": 0, 00:14:28.193 "data_size": 65536 00:14:28.193 }, 00:14:28.193 { 00:14:28.193 "name": "BaseBdev2", 00:14:28.193 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:28.193 "is_configured": true, 00:14:28.193 "data_offset": 0, 00:14:28.193 "data_size": 65536 00:14:28.193 }, 00:14:28.193 { 00:14:28.193 "name": "BaseBdev3", 00:14:28.193 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:28.193 "is_configured": true, 00:14:28.193 "data_offset": 0, 00:14:28.193 "data_size": 65536 00:14:28.193 } 00:14:28.193 ] 00:14:28.193 }' 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.193 09:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.791 [2024-11-06 09:07:05.296790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:28.791 [2024-11-06 09:07:05.297045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:28.791 [2024-11-06 09:07:05.297075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:28.791 [2024-11-06 09:07:05.297403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:28.791 [2024-11-06 09:07:05.297594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:28.791 [2024-11-06 09:07:05.297611] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000008200 00:14:28.791 [2024-11-06 09:07:05.298031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.791 NewBaseBdev 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.791 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.791 [ 00:14:29.050 { 00:14:29.050 "name": "NewBaseBdev", 00:14:29.050 "aliases": [ 00:14:29.050 "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14" 00:14:29.050 ], 00:14:29.050 "product_name": "Malloc disk", 00:14:29.050 "block_size": 512, 00:14:29.050 "num_blocks": 65536, 
00:14:29.050 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:29.050 "assigned_rate_limits": { 00:14:29.050 "rw_ios_per_sec": 0, 00:14:29.050 "rw_mbytes_per_sec": 0, 00:14:29.050 "r_mbytes_per_sec": 0, 00:14:29.050 "w_mbytes_per_sec": 0 00:14:29.050 }, 00:14:29.050 "claimed": true, 00:14:29.050 "claim_type": "exclusive_write", 00:14:29.050 "zoned": false, 00:14:29.050 "supported_io_types": { 00:14:29.050 "read": true, 00:14:29.050 "write": true, 00:14:29.050 "unmap": true, 00:14:29.050 "flush": true, 00:14:29.050 "reset": true, 00:14:29.050 "nvme_admin": false, 00:14:29.050 "nvme_io": false, 00:14:29.050 "nvme_io_md": false, 00:14:29.050 "write_zeroes": true, 00:14:29.050 "zcopy": true, 00:14:29.050 "get_zone_info": false, 00:14:29.050 "zone_management": false, 00:14:29.050 "zone_append": false, 00:14:29.050 "compare": false, 00:14:29.050 "compare_and_write": false, 00:14:29.050 "abort": true, 00:14:29.050 "seek_hole": false, 00:14:29.050 "seek_data": false, 00:14:29.050 "copy": true, 00:14:29.050 "nvme_iov_md": false 00:14:29.050 }, 00:14:29.050 "memory_domains": [ 00:14:29.050 { 00:14:29.050 "dma_device_id": "system", 00:14:29.050 "dma_device_type": 1 00:14:29.050 }, 00:14:29.050 { 00:14:29.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.050 "dma_device_type": 2 00:14:29.050 } 00:14:29.050 ], 00:14:29.050 "driver_specific": {} 00:14:29.050 } 00:14:29.050 ] 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.050 
09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.050 "name": "Existed_Raid", 00:14:29.050 "uuid": "194d716e-dc0b-4e31-8df8-e5eb54e937f7", 00:14:29.050 "strip_size_kb": 64, 00:14:29.050 "state": "online", 00:14:29.050 "raid_level": "concat", 00:14:29.050 "superblock": false, 00:14:29.050 "num_base_bdevs": 3, 00:14:29.050 "num_base_bdevs_discovered": 3, 00:14:29.050 "num_base_bdevs_operational": 3, 00:14:29.050 "base_bdevs_list": [ 00:14:29.050 { 00:14:29.050 "name": "NewBaseBdev", 00:14:29.050 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:29.050 "is_configured": true, 00:14:29.050 
"data_offset": 0, 00:14:29.050 "data_size": 65536 00:14:29.050 }, 00:14:29.050 { 00:14:29.050 "name": "BaseBdev2", 00:14:29.050 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:29.050 "is_configured": true, 00:14:29.050 "data_offset": 0, 00:14:29.050 "data_size": 65536 00:14:29.050 }, 00:14:29.050 { 00:14:29.050 "name": "BaseBdev3", 00:14:29.050 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:29.050 "is_configured": true, 00:14:29.050 "data_offset": 0, 00:14:29.050 "data_size": 65536 00:14:29.050 } 00:14:29.050 ] 00:14:29.050 }' 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.050 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.620 [2024-11-06 09:07:05.869488] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.620 "name": "Existed_Raid", 00:14:29.620 "aliases": [ 00:14:29.620 "194d716e-dc0b-4e31-8df8-e5eb54e937f7" 00:14:29.620 ], 00:14:29.620 "product_name": "Raid Volume", 00:14:29.620 "block_size": 512, 00:14:29.620 "num_blocks": 196608, 00:14:29.620 "uuid": "194d716e-dc0b-4e31-8df8-e5eb54e937f7", 00:14:29.620 "assigned_rate_limits": { 00:14:29.620 "rw_ios_per_sec": 0, 00:14:29.620 "rw_mbytes_per_sec": 0, 00:14:29.620 "r_mbytes_per_sec": 0, 00:14:29.620 "w_mbytes_per_sec": 0 00:14:29.620 }, 00:14:29.620 "claimed": false, 00:14:29.620 "zoned": false, 00:14:29.620 "supported_io_types": { 00:14:29.620 "read": true, 00:14:29.620 "write": true, 00:14:29.620 "unmap": true, 00:14:29.620 "flush": true, 00:14:29.620 "reset": true, 00:14:29.620 "nvme_admin": false, 00:14:29.620 "nvme_io": false, 00:14:29.620 "nvme_io_md": false, 00:14:29.620 "write_zeroes": true, 00:14:29.620 "zcopy": false, 00:14:29.620 "get_zone_info": false, 00:14:29.620 "zone_management": false, 00:14:29.620 "zone_append": false, 00:14:29.620 "compare": false, 00:14:29.620 "compare_and_write": false, 00:14:29.620 "abort": false, 00:14:29.620 "seek_hole": false, 00:14:29.620 "seek_data": false, 00:14:29.620 "copy": false, 00:14:29.620 "nvme_iov_md": false 00:14:29.620 }, 00:14:29.620 "memory_domains": [ 00:14:29.620 { 00:14:29.620 "dma_device_id": "system", 00:14:29.620 "dma_device_type": 1 00:14:29.620 }, 00:14:29.620 { 00:14:29.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.620 "dma_device_type": 2 00:14:29.620 }, 00:14:29.620 { 00:14:29.620 "dma_device_id": "system", 00:14:29.620 "dma_device_type": 1 00:14:29.620 }, 00:14:29.620 { 00:14:29.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.620 "dma_device_type": 2 00:14:29.620 }, 00:14:29.620 { 00:14:29.620 "dma_device_id": 
"system", 00:14:29.620 "dma_device_type": 1 00:14:29.620 }, 00:14:29.620 { 00:14:29.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.620 "dma_device_type": 2 00:14:29.620 } 00:14:29.620 ], 00:14:29.620 "driver_specific": { 00:14:29.620 "raid": { 00:14:29.620 "uuid": "194d716e-dc0b-4e31-8df8-e5eb54e937f7", 00:14:29.620 "strip_size_kb": 64, 00:14:29.620 "state": "online", 00:14:29.620 "raid_level": "concat", 00:14:29.620 "superblock": false, 00:14:29.620 "num_base_bdevs": 3, 00:14:29.620 "num_base_bdevs_discovered": 3, 00:14:29.620 "num_base_bdevs_operational": 3, 00:14:29.620 "base_bdevs_list": [ 00:14:29.620 { 00:14:29.620 "name": "NewBaseBdev", 00:14:29.620 "uuid": "9d95a00a-f6da-4ec0-a0ae-7ae9d7477c14", 00:14:29.620 "is_configured": true, 00:14:29.620 "data_offset": 0, 00:14:29.620 "data_size": 65536 00:14:29.620 }, 00:14:29.620 { 00:14:29.620 "name": "BaseBdev2", 00:14:29.620 "uuid": "1f99f3e3-e2db-457b-80ff-4e128e17f4bb", 00:14:29.620 "is_configured": true, 00:14:29.620 "data_offset": 0, 00:14:29.620 "data_size": 65536 00:14:29.620 }, 00:14:29.620 { 00:14:29.620 "name": "BaseBdev3", 00:14:29.620 "uuid": "2f7f3e9e-f828-445d-aa43-e93562710a26", 00:14:29.620 "is_configured": true, 00:14:29.620 "data_offset": 0, 00:14:29.620 "data_size": 65536 00:14:29.620 } 00:14:29.620 ] 00:14:29.620 } 00:14:29.620 } 00:14:29.620 }' 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:29.620 BaseBdev2 00:14:29.620 BaseBdev3' 00:14:29.620 09:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.620 09:07:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.620 
09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.620 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.879 [2024-11-06 09:07:06.201210] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.879 [2024-11-06 09:07:06.201401] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.879 [2024-11-06 09:07:06.201631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.879 [2024-11-06 09:07:06.201803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.879 [2024-11-06 09:07:06.201953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 65555 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 65555 ']' 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 65555 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65555 00:14:29.879 killing process with pid 65555 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65555' 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 65555 00:14:29.879 [2024-11-06 09:07:06.235175] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.879 09:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 65555 00:14:30.138 [2024-11-06 09:07:06.509392] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:31.074 00:14:31.074 real 0m12.125s 00:14:31.074 user 0m20.197s 00:14:31.074 sys 0m1.602s 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:31.074 ************************************ 00:14:31.074 END TEST raid_state_function_test 00:14:31.074 ************************************ 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:31.074 09:07:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:14:31.074 09:07:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:31.074 09:07:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:31.074 09:07:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.074 ************************************ 00:14:31.074 START TEST raid_state_function_test_sb 00:14:31.074 ************************************ 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.074 
09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:31.074 Process raid pid: 66195 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66195 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66195' 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66195 00:14:31.074 09:07:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 66195 ']' 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.074 09:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.333 09:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.333 09:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.333 [2024-11-06 09:07:07.714418] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:14:31.333 [2024-11-06 09:07:07.714876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.592 [2024-11-06 09:07:07.896735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.592 [2024-11-06 09:07:08.022875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.851 [2024-11-06 09:07:08.225361] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.851 [2024-11-06 09:07:08.225664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.109 [2024-11-06 09:07:08.622296] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.109 [2024-11-06 09:07:08.622520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.109 [2024-11-06 09:07:08.622644] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.109 [2024-11-06 09:07:08.622705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.109 [2024-11-06 09:07:08.622835] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:14:32.109 [2024-11-06 09:07:08.622896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.109 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.368 09:07:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.368 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.368 "name": "Existed_Raid", 00:14:32.368 "uuid": "f9d32316-26f3-4e57-822c-9f46da035ab2", 00:14:32.368 "strip_size_kb": 64, 00:14:32.368 "state": "configuring", 00:14:32.368 "raid_level": "concat", 00:14:32.368 "superblock": true, 00:14:32.368 "num_base_bdevs": 3, 00:14:32.368 "num_base_bdevs_discovered": 0, 00:14:32.368 "num_base_bdevs_operational": 3, 00:14:32.368 "base_bdevs_list": [ 00:14:32.368 { 00:14:32.368 "name": "BaseBdev1", 00:14:32.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.368 "is_configured": false, 00:14:32.368 "data_offset": 0, 00:14:32.368 "data_size": 0 00:14:32.368 }, 00:14:32.368 { 00:14:32.368 "name": "BaseBdev2", 00:14:32.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.368 "is_configured": false, 00:14:32.368 "data_offset": 0, 00:14:32.368 "data_size": 0 00:14:32.368 }, 00:14:32.368 { 00:14:32.368 "name": "BaseBdev3", 00:14:32.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.368 "is_configured": false, 00:14:32.368 "data_offset": 0, 00:14:32.368 "data_size": 0 00:14:32.368 } 00:14:32.368 ] 00:14:32.368 }' 00:14:32.368 09:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.368 09:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.628 [2024-11-06 09:07:09.134458] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.628 [2024-11-06 09:07:09.134644] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.628 [2024-11-06 09:07:09.142439] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.628 [2024-11-06 09:07:09.142611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.628 [2024-11-06 09:07:09.142884] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.628 [2024-11-06 09:07:09.142947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.628 [2024-11-06 09:07:09.142987] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.628 [2024-11-06 09:07:09.143028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.628 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.887 [2024-11-06 09:07:09.185280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.887 BaseBdev1 
00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.887 [ 00:14:32.887 { 00:14:32.887 "name": "BaseBdev1", 00:14:32.887 "aliases": [ 00:14:32.887 "0c172d1b-654b-45a8-9acf-1b8bb28857cd" 00:14:32.887 ], 00:14:32.887 "product_name": "Malloc disk", 00:14:32.887 "block_size": 512, 00:14:32.887 "num_blocks": 65536, 00:14:32.887 "uuid": "0c172d1b-654b-45a8-9acf-1b8bb28857cd", 00:14:32.887 "assigned_rate_limits": { 00:14:32.887 
"rw_ios_per_sec": 0, 00:14:32.887 "rw_mbytes_per_sec": 0, 00:14:32.887 "r_mbytes_per_sec": 0, 00:14:32.887 "w_mbytes_per_sec": 0 00:14:32.887 }, 00:14:32.887 "claimed": true, 00:14:32.887 "claim_type": "exclusive_write", 00:14:32.887 "zoned": false, 00:14:32.887 "supported_io_types": { 00:14:32.887 "read": true, 00:14:32.887 "write": true, 00:14:32.887 "unmap": true, 00:14:32.887 "flush": true, 00:14:32.887 "reset": true, 00:14:32.887 "nvme_admin": false, 00:14:32.887 "nvme_io": false, 00:14:32.887 "nvme_io_md": false, 00:14:32.887 "write_zeroes": true, 00:14:32.887 "zcopy": true, 00:14:32.887 "get_zone_info": false, 00:14:32.887 "zone_management": false, 00:14:32.887 "zone_append": false, 00:14:32.887 "compare": false, 00:14:32.887 "compare_and_write": false, 00:14:32.887 "abort": true, 00:14:32.887 "seek_hole": false, 00:14:32.887 "seek_data": false, 00:14:32.887 "copy": true, 00:14:32.887 "nvme_iov_md": false 00:14:32.887 }, 00:14:32.887 "memory_domains": [ 00:14:32.887 { 00:14:32.887 "dma_device_id": "system", 00:14:32.887 "dma_device_type": 1 00:14:32.887 }, 00:14:32.887 { 00:14:32.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.887 "dma_device_type": 2 00:14:32.887 } 00:14:32.887 ], 00:14:32.887 "driver_specific": {} 00:14:32.887 } 00:14:32.887 ] 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.887 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.887 "name": "Existed_Raid", 00:14:32.887 "uuid": "518ff80c-39aa-454d-ace0-6a078838a766", 00:14:32.888 "strip_size_kb": 64, 00:14:32.888 "state": "configuring", 00:14:32.888 "raid_level": "concat", 00:14:32.888 "superblock": true, 00:14:32.888 "num_base_bdevs": 3, 00:14:32.888 "num_base_bdevs_discovered": 1, 00:14:32.888 "num_base_bdevs_operational": 3, 00:14:32.888 "base_bdevs_list": [ 00:14:32.888 { 00:14:32.888 "name": "BaseBdev1", 00:14:32.888 "uuid": "0c172d1b-654b-45a8-9acf-1b8bb28857cd", 00:14:32.888 "is_configured": true, 00:14:32.888 "data_offset": 2048, 00:14:32.888 "data_size": 
63488 00:14:32.888 }, 00:14:32.888 { 00:14:32.888 "name": "BaseBdev2", 00:14:32.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.888 "is_configured": false, 00:14:32.888 "data_offset": 0, 00:14:32.888 "data_size": 0 00:14:32.888 }, 00:14:32.888 { 00:14:32.888 "name": "BaseBdev3", 00:14:32.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.888 "is_configured": false, 00:14:32.888 "data_offset": 0, 00:14:32.888 "data_size": 0 00:14:32.888 } 00:14:32.888 ] 00:14:32.888 }' 00:14:32.888 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.888 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.455 [2024-11-06 09:07:09.773509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.455 [2024-11-06 09:07:09.773566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.455 [2024-11-06 09:07:09.781606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.455 [2024-11-06 
09:07:09.784128] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.455 [2024-11-06 09:07:09.784363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.455 [2024-11-06 09:07:09.784485] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:33.455 [2024-11-06 09:07:09.784550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:33.455 "name": "Existed_Raid",
00:14:33.455 "uuid": "2be26415-d7a4-43ff-9bcb-049afe6b9c97",
00:14:33.455 "strip_size_kb": 64,
00:14:33.455 "state": "configuring",
00:14:33.455 "raid_level": "concat",
00:14:33.455 "superblock": true,
00:14:33.455 "num_base_bdevs": 3,
00:14:33.455 "num_base_bdevs_discovered": 1,
00:14:33.455 "num_base_bdevs_operational": 3,
00:14:33.455 "base_bdevs_list": [
00:14:33.455 {
00:14:33.455 "name": "BaseBdev1",
00:14:33.455 "uuid": "0c172d1b-654b-45a8-9acf-1b8bb28857cd",
00:14:33.455 "is_configured": true,
00:14:33.455 "data_offset": 2048,
00:14:33.455 "data_size": 63488
00:14:33.455 },
00:14:33.455 {
00:14:33.455 "name": "BaseBdev2",
00:14:33.455 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:33.455 "is_configured": false,
00:14:33.455 "data_offset": 0,
00:14:33.455 "data_size": 0
00:14:33.455 },
00:14:33.455 {
00:14:33.455 "name": "BaseBdev3",
00:14:33.455 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:33.455 "is_configured": false,
00:14:33.455 "data_offset": 0,
00:14:33.455 "data_size": 0
00:14:33.455 }
00:14:33.455 ]
00:14:33.455 }'
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:33.455 09:07:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.023 [2024-11-06 09:07:10.346663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:34.023 BaseBdev2
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.023 [
00:14:34.023 {
00:14:34.023 "name": "BaseBdev2",
00:14:34.023 "aliases": [
00:14:34.023 "6c968b4a-cc42-4925-8a19-060b922b9583"
00:14:34.023 ],
00:14:34.023 "product_name": "Malloc disk",
00:14:34.023 "block_size": 512,
00:14:34.023 "num_blocks": 65536,
00:14:34.023 "uuid": "6c968b4a-cc42-4925-8a19-060b922b9583",
00:14:34.023 "assigned_rate_limits": {
00:14:34.023 "rw_ios_per_sec": 0,
00:14:34.023 "rw_mbytes_per_sec": 0,
00:14:34.023 "r_mbytes_per_sec": 0,
00:14:34.023 "w_mbytes_per_sec": 0
00:14:34.023 },
00:14:34.023 "claimed": true,
00:14:34.023 "claim_type": "exclusive_write",
00:14:34.023 "zoned": false,
00:14:34.023 "supported_io_types": {
00:14:34.023 "read": true,
00:14:34.023 "write": true,
00:14:34.023 "unmap": true,
00:14:34.023 "flush": true,
00:14:34.023 "reset": true,
00:14:34.023 "nvme_admin": false,
00:14:34.023 "nvme_io": false,
00:14:34.023 "nvme_io_md": false,
00:14:34.023 "write_zeroes": true,
00:14:34.023 "zcopy": true,
00:14:34.023 "get_zone_info": false,
00:14:34.023 "zone_management": false,
00:14:34.023 "zone_append": false,
00:14:34.023 "compare": false,
00:14:34.023 "compare_and_write": false,
00:14:34.023 "abort": true,
00:14:34.023 "seek_hole": false,
00:14:34.023 "seek_data": false,
00:14:34.023 "copy": true,
00:14:34.023 "nvme_iov_md": false
00:14:34.023 },
00:14:34.023 "memory_domains": [
00:14:34.023 {
00:14:34.023 "dma_device_id": "system",
00:14:34.023 "dma_device_type": 1
00:14:34.023 },
00:14:34.023 {
00:14:34.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:34.023 "dma_device_type": 2
00:14:34.023 }
00:14:34.023 ],
00:14:34.023 "driver_specific": {}
00:14:34.023 }
00:14:34.023 ]
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:34.023 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:34.024 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:34.024 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.024 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.024 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.024 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:34.024 "name": "Existed_Raid",
00:14:34.024 "uuid": "2be26415-d7a4-43ff-9bcb-049afe6b9c97",
00:14:34.024 "strip_size_kb": 64,
00:14:34.024 "state": "configuring",
00:14:34.024 "raid_level": "concat",
00:14:34.024 "superblock": true,
00:14:34.024 "num_base_bdevs": 3,
00:14:34.024 "num_base_bdevs_discovered": 2,
00:14:34.024 "num_base_bdevs_operational": 3,
00:14:34.024 "base_bdevs_list": [
00:14:34.024 {
00:14:34.024 "name": "BaseBdev1",
00:14:34.024 "uuid": "0c172d1b-654b-45a8-9acf-1b8bb28857cd",
00:14:34.024 "is_configured": true,
00:14:34.024 "data_offset": 2048,
00:14:34.024 "data_size": 63488
00:14:34.024 },
00:14:34.024 {
00:14:34.024 "name": "BaseBdev2",
00:14:34.024 "uuid": "6c968b4a-cc42-4925-8a19-060b922b9583",
00:14:34.024 "is_configured": true,
00:14:34.024 "data_offset": 2048,
00:14:34.024 "data_size": 63488
00:14:34.024 },
00:14:34.024 {
00:14:34.024 "name": "BaseBdev3",
00:14:34.024 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:34.024 "is_configured": false,
00:14:34.024 "data_offset": 0,
00:14:34.024 "data_size": 0
00:14:34.024 }
00:14:34.024 ]
00:14:34.024 }'
00:14:34.024 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:34.024 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.592 BaseBdev3
00:14:34.592 [2024-11-06 09:07:10.925030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:34.592 [2024-11-06 09:07:10.925316] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:14:34.592 [2024-11-06 09:07:10.925344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:14:34.592 [2024-11-06 09:07:10.925627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:14:34.592 [2024-11-06 09:07:10.925799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:14:34.592 [2024-11-06 09:07:10.925814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:14:34.592 [2024-11-06 09:07:10.926023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.592 [
00:14:34.592 {
00:14:34.592 "name": "BaseBdev3",
00:14:34.592 "aliases": [
00:14:34.592 "82c07023-6156-420b-b2f4-8368b66f2fb9"
00:14:34.592 ],
00:14:34.592 "product_name": "Malloc disk",
00:14:34.592 "block_size": 512,
00:14:34.592 "num_blocks": 65536,
00:14:34.592 "uuid": "82c07023-6156-420b-b2f4-8368b66f2fb9",
00:14:34.592 "assigned_rate_limits": {
00:14:34.592 "rw_ios_per_sec": 0,
00:14:34.592 "rw_mbytes_per_sec": 0,
00:14:34.592 "r_mbytes_per_sec": 0,
00:14:34.592 "w_mbytes_per_sec": 0
00:14:34.592 },
00:14:34.592 "claimed": true,
00:14:34.592 "claim_type": "exclusive_write",
00:14:34.592 "zoned": false,
00:14:34.592 "supported_io_types": {
00:14:34.592 "read": true,
00:14:34.592 "write": true,
00:14:34.592 "unmap": true,
00:14:34.592 "flush": true,
00:14:34.592 "reset": true,
00:14:34.592 "nvme_admin": false,
00:14:34.592 "nvme_io": false,
00:14:34.592 "nvme_io_md": false,
00:14:34.592 "write_zeroes": true,
00:14:34.592 "zcopy": true,
00:14:34.592 "get_zone_info": false,
00:14:34.592 "zone_management": false,
00:14:34.592 "zone_append": false,
00:14:34.592 "compare": false,
00:14:34.592 "compare_and_write": false,
00:14:34.592 "abort": true,
00:14:34.592 "seek_hole": false,
00:14:34.592 "seek_data": false,
00:14:34.592 "copy": true,
00:14:34.592 "nvme_iov_md": false
00:14:34.592 },
00:14:34.592 "memory_domains": [
00:14:34.592 {
00:14:34.592 "dma_device_id": "system",
00:14:34.592 "dma_device_type": 1
00:14:34.592 },
00:14:34.592 {
00:14:34.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:34.592 "dma_device_type": 2
00:14:34.592 }
00:14:34.592 ],
00:14:34.592 "driver_specific": {}
00:14:34.592 }
00:14:34.592 ]
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:34.592 09:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.592 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:34.592 "name": "Existed_Raid",
00:14:34.592 "uuid": "2be26415-d7a4-43ff-9bcb-049afe6b9c97",
00:14:34.592 "strip_size_kb": 64,
00:14:34.592 "state": "online",
00:14:34.592 "raid_level": "concat",
00:14:34.592 "superblock": true,
00:14:34.592 "num_base_bdevs": 3,
00:14:34.592 "num_base_bdevs_discovered": 3,
00:14:34.592 "num_base_bdevs_operational": 3,
00:14:34.592 "base_bdevs_list": [
00:14:34.592 {
00:14:34.592 "name": "BaseBdev1",
00:14:34.592 "uuid": "0c172d1b-654b-45a8-9acf-1b8bb28857cd",
00:14:34.592 "is_configured": true,
00:14:34.592 "data_offset": 2048,
00:14:34.592 "data_size": 63488
00:14:34.592 },
00:14:34.592 {
00:14:34.592 "name": "BaseBdev2",
00:14:34.592 "uuid": "6c968b4a-cc42-4925-8a19-060b922b9583",
00:14:34.592 "is_configured": true,
00:14:34.592 "data_offset": 2048,
00:14:34.592 "data_size": 63488
00:14:34.592 },
00:14:34.592 {
00:14:34.592 "name": "BaseBdev3",
00:14:34.592 "uuid": "82c07023-6156-420b-b2f4-8368b66f2fb9",
00:14:34.592 "is_configured": true,
00:14:34.592 "data_offset": 2048,
00:14:34.592 "data_size": 63488
00:14:34.592 }
00:14:34.592 ]
00:14:34.592 }'
00:14:34.592 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:34.592 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.160 [2024-11-06 09:07:11.465646] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:14:35.160 "name": "Existed_Raid",
00:14:35.160 "aliases": [
00:14:35.160 "2be26415-d7a4-43ff-9bcb-049afe6b9c97"
00:14:35.160 ],
00:14:35.160 "product_name": "Raid Volume",
00:14:35.160 "block_size": 512,
00:14:35.160 "num_blocks": 190464,
00:14:35.160 "uuid": "2be26415-d7a4-43ff-9bcb-049afe6b9c97",
00:14:35.160 "assigned_rate_limits": {
00:14:35.160 "rw_ios_per_sec": 0,
00:14:35.160 "rw_mbytes_per_sec": 0,
00:14:35.160 "r_mbytes_per_sec": 0,
00:14:35.160 "w_mbytes_per_sec": 0
00:14:35.160 },
00:14:35.160 "claimed": false,
00:14:35.160 "zoned": false,
00:14:35.160 "supported_io_types": {
00:14:35.160 "read": true,
00:14:35.160 "write": true,
00:14:35.160 "unmap": true,
00:14:35.160 "flush": true,
00:14:35.160 "reset": true,
00:14:35.160 "nvme_admin": false,
00:14:35.160 "nvme_io": false,
00:14:35.160 "nvme_io_md": false,
00:14:35.160 "write_zeroes": true,
00:14:35.160 "zcopy": false,
00:14:35.160 "get_zone_info": false,
00:14:35.160 "zone_management": false,
00:14:35.160 "zone_append": false,
00:14:35.160 "compare": false,
00:14:35.160 "compare_and_write": false,
00:14:35.160 "abort": false,
00:14:35.160 "seek_hole": false,
00:14:35.160 "seek_data": false,
00:14:35.160 "copy": false,
00:14:35.160 "nvme_iov_md": false
00:14:35.160 },
00:14:35.160 "memory_domains": [
00:14:35.160 {
00:14:35.160 "dma_device_id": "system",
00:14:35.160 "dma_device_type": 1
00:14:35.160 },
00:14:35.160 {
00:14:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:35.160 "dma_device_type": 2
00:14:35.160 },
00:14:35.160 {
00:14:35.160 "dma_device_id": "system",
00:14:35.160 "dma_device_type": 1
00:14:35.160 },
00:14:35.160 {
00:14:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:35.160 "dma_device_type": 2
00:14:35.160 },
00:14:35.160 {
00:14:35.160 "dma_device_id": "system",
00:14:35.160 "dma_device_type": 1
00:14:35.160 },
00:14:35.160 {
00:14:35.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:35.160 "dma_device_type": 2
00:14:35.160 }
00:14:35.160 ],
00:14:35.160 "driver_specific": {
00:14:35.160 "raid": {
00:14:35.160 "uuid": "2be26415-d7a4-43ff-9bcb-049afe6b9c97",
00:14:35.160 "strip_size_kb": 64,
00:14:35.160 "state": "online",
00:14:35.160 "raid_level": "concat",
00:14:35.160 "superblock": true,
00:14:35.160 "num_base_bdevs": 3,
00:14:35.160 "num_base_bdevs_discovered": 3,
00:14:35.160 "num_base_bdevs_operational": 3,
00:14:35.160 "base_bdevs_list": [
00:14:35.160 {
00:14:35.160 "name": "BaseBdev1",
00:14:35.160 "uuid": "0c172d1b-654b-45a8-9acf-1b8bb28857cd",
00:14:35.160 "is_configured": true,
00:14:35.160 "data_offset": 2048,
00:14:35.160 "data_size": 63488
00:14:35.160 },
00:14:35.160 {
00:14:35.160 "name": "BaseBdev2",
00:14:35.160 "uuid": "6c968b4a-cc42-4925-8a19-060b922b9583",
00:14:35.160 "is_configured": true,
00:14:35.160 "data_offset": 2048,
00:14:35.160 "data_size": 63488
00:14:35.160 },
00:14:35.160 {
00:14:35.160 "name": "BaseBdev3",
00:14:35.160 "uuid": "82c07023-6156-420b-b2f4-8368b66f2fb9",
00:14:35.160 "is_configured": true,
00:14:35.160 "data_offset": 2048,
00:14:35.160 "data_size": 63488
00:14:35.160 }
00:14:35.160 ]
00:14:35.160 }
00:14:35.160 }
00:14:35.160 }'
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:14:35.160 BaseBdev2
00:14:35.160 BaseBdev3'
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:35.160 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:35.161 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:14:35.161 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.161 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.161 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:35.161 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.420 [2024-11-06 09:07:11.797490] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:35.420 [2024-11-06 09:07:11.797640] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:35.420 [2024-11-06 09:07:11.797877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:35.420 "name": "Existed_Raid",
00:14:35.420 "uuid": "2be26415-d7a4-43ff-9bcb-049afe6b9c97",
00:14:35.420 "strip_size_kb": 64,
00:14:35.420 "state": "offline",
00:14:35.420 "raid_level": "concat",
00:14:35.420 "superblock": true,
00:14:35.420 "num_base_bdevs": 3,
00:14:35.420 "num_base_bdevs_discovered": 2,
00:14:35.420 "num_base_bdevs_operational": 2,
00:14:35.420 "base_bdevs_list": [
00:14:35.420 {
00:14:35.420 "name": null,
00:14:35.420 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:35.420 "is_configured": false,
00:14:35.420 "data_offset": 0,
00:14:35.420 "data_size": 63488
00:14:35.420 },
00:14:35.420 {
00:14:35.420 "name": "BaseBdev2",
00:14:35.420 "uuid": "6c968b4a-cc42-4925-8a19-060b922b9583",
00:14:35.420 "is_configured": true,
00:14:35.420 "data_offset": 2048,
00:14:35.420 "data_size": 63488
00:14:35.420 },
00:14:35.420 {
00:14:35.420 "name": "BaseBdev3",
00:14:35.420 "uuid": "82c07023-6156-420b-b2f4-8368b66f2fb9",
00:14:35.420 "is_configured": true,
00:14:35.420 "data_offset": 2048,
00:14:35.420 "data_size": 63488
00:14:35.420 }
00:14:35.420 ]
00:14:35.420 }'
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:35.420 09:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.988 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:35.988 [2024-11-06 09:07:12.484951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.246 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.246 [2024-11-06 09:07:12.639235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:14:36.246 [2024-11-06 09:07:12.639475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.247 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.506 BaseBdev2
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.506 [
00:14:36.506 {
00:14:36.506 "name": "BaseBdev2",
00:14:36.506 "aliases": [
00:14:36.506 "549ef86f-5a0c-4d9d-88a9-557e68b5a417"
00:14:36.506 ],
00:14:36.506 "product_name": "Malloc disk",
00:14:36.506 "block_size": 512,
00:14:36.506 "num_blocks": 65536,
00:14:36.506 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417",
00:14:36.506 "assigned_rate_limits": {
00:14:36.506 "rw_ios_per_sec": 0,
00:14:36.506 "rw_mbytes_per_sec": 0,
00:14:36.506 "r_mbytes_per_sec": 0,
00:14:36.506 "w_mbytes_per_sec": 0
00:14:36.506 },
00:14:36.506 "claimed": false,
00:14:36.506 "zoned": false,
00:14:36.506 "supported_io_types": {
00:14:36.506 "read": true,
00:14:36.506 "write": true,
00:14:36.506 "unmap": true,
00:14:36.506 "flush": true,
00:14:36.506 "reset": true,
00:14:36.506 "nvme_admin": false,
00:14:36.506 "nvme_io": false,
00:14:36.506 "nvme_io_md": false,
00:14:36.506 "write_zeroes": true,
00:14:36.506 "zcopy": true,
00:14:36.506 "get_zone_info": false,
00:14:36.506 "zone_management": false,
00:14:36.506 "zone_append": false,
00:14:36.506 "compare": false,
00:14:36.506 "compare_and_write": false,
00:14:36.506 "abort": true,
00:14:36.506 "seek_hole": false,
00:14:36.506 "seek_data": false,
00:14:36.506 "copy": true,
00:14:36.506 "nvme_iov_md": false
00:14:36.506 },
00:14:36.506 "memory_domains": [
00:14:36.506 {
00:14:36.506 "dma_device_id": "system",
00:14:36.506 "dma_device_type": 1
00:14:36.506 },
00:14:36.506 {
00:14:36.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:36.506 "dma_device_type": 2
00:14:36.506 }
00:14:36.506 ],
00:14:36.506 "driver_specific": {}
00:14:36.506 }
00:14:36.506 ]
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:14:36.506 BaseBdev3
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@903 -- # local i 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.506 [ 00:14:36.506 { 00:14:36.506 "name": "BaseBdev3", 00:14:36.506 "aliases": [ 00:14:36.506 "f80c6ab8-37b7-40d6-b11b-6fd827ea377c" 00:14:36.506 ], 00:14:36.506 "product_name": "Malloc disk", 00:14:36.506 "block_size": 512, 00:14:36.506 "num_blocks": 65536, 00:14:36.506 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:36.506 "assigned_rate_limits": { 00:14:36.506 "rw_ios_per_sec": 0, 00:14:36.506 "rw_mbytes_per_sec": 0, 00:14:36.506 "r_mbytes_per_sec": 0, 00:14:36.506 "w_mbytes_per_sec": 0 00:14:36.506 }, 00:14:36.506 "claimed": false, 00:14:36.506 "zoned": false, 00:14:36.506 "supported_io_types": { 00:14:36.506 "read": true, 00:14:36.506 "write": true, 00:14:36.506 "unmap": true, 00:14:36.506 "flush": true, 00:14:36.506 "reset": true, 00:14:36.506 "nvme_admin": false, 00:14:36.506 "nvme_io": false, 00:14:36.506 "nvme_io_md": false, 00:14:36.506 "write_zeroes": true, 00:14:36.506 
"zcopy": true, 00:14:36.506 "get_zone_info": false, 00:14:36.506 "zone_management": false, 00:14:36.506 "zone_append": false, 00:14:36.506 "compare": false, 00:14:36.506 "compare_and_write": false, 00:14:36.506 "abort": true, 00:14:36.506 "seek_hole": false, 00:14:36.506 "seek_data": false, 00:14:36.506 "copy": true, 00:14:36.506 "nvme_iov_md": false 00:14:36.506 }, 00:14:36.506 "memory_domains": [ 00:14:36.506 { 00:14:36.506 "dma_device_id": "system", 00:14:36.506 "dma_device_type": 1 00:14:36.506 }, 00:14:36.506 { 00:14:36.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.506 "dma_device_type": 2 00:14:36.506 } 00:14:36.506 ], 00:14:36.506 "driver_specific": {} 00:14:36.506 } 00:14:36.506 ] 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.506 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.506 [2024-11-06 09:07:12.927628] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.507 [2024-11-06 09:07:12.927806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.507 [2024-11-06 09:07:12.927954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.507 [2024-11-06 09:07:12.930454] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.507 09:07:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.507 "name": "Existed_Raid", 00:14:36.507 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:36.507 "strip_size_kb": 64, 00:14:36.507 "state": "configuring", 00:14:36.507 "raid_level": "concat", 00:14:36.507 "superblock": true, 00:14:36.507 "num_base_bdevs": 3, 00:14:36.507 "num_base_bdevs_discovered": 2, 00:14:36.507 "num_base_bdevs_operational": 3, 00:14:36.507 "base_bdevs_list": [ 00:14:36.507 { 00:14:36.507 "name": "BaseBdev1", 00:14:36.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.507 "is_configured": false, 00:14:36.507 "data_offset": 0, 00:14:36.507 "data_size": 0 00:14:36.507 }, 00:14:36.507 { 00:14:36.507 "name": "BaseBdev2", 00:14:36.507 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:36.507 "is_configured": true, 00:14:36.507 "data_offset": 2048, 00:14:36.507 "data_size": 63488 00:14:36.507 }, 00:14:36.507 { 00:14:36.507 "name": "BaseBdev3", 00:14:36.507 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:36.507 "is_configured": true, 00:14:36.507 "data_offset": 2048, 00:14:36.507 "data_size": 63488 00:14:36.507 } 00:14:36.507 ] 00:14:36.507 }' 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.507 09:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.074 [2024-11-06 09:07:13.451822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.074 09:07:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.074 "name": "Existed_Raid", 00:14:37.074 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:37.074 "strip_size_kb": 64, 
00:14:37.074 "state": "configuring", 00:14:37.074 "raid_level": "concat", 00:14:37.074 "superblock": true, 00:14:37.074 "num_base_bdevs": 3, 00:14:37.074 "num_base_bdevs_discovered": 1, 00:14:37.074 "num_base_bdevs_operational": 3, 00:14:37.074 "base_bdevs_list": [ 00:14:37.074 { 00:14:37.074 "name": "BaseBdev1", 00:14:37.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.074 "is_configured": false, 00:14:37.074 "data_offset": 0, 00:14:37.074 "data_size": 0 00:14:37.074 }, 00:14:37.074 { 00:14:37.074 "name": null, 00:14:37.074 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:37.074 "is_configured": false, 00:14:37.074 "data_offset": 0, 00:14:37.074 "data_size": 63488 00:14:37.074 }, 00:14:37.074 { 00:14:37.074 "name": "BaseBdev3", 00:14:37.074 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:37.074 "is_configured": true, 00:14:37.074 "data_offset": 2048, 00:14:37.074 "data_size": 63488 00:14:37.074 } 00:14:37.074 ] 00:14:37.074 }' 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.074 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.642 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.642 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.642 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.642 09:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:37.642 09:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.642 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:37.642 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:14:37.642 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.642 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.643 [2024-11-06 09:07:14.066213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.643 BaseBdev1 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.643 
[ 00:14:37.643 { 00:14:37.643 "name": "BaseBdev1", 00:14:37.643 "aliases": [ 00:14:37.643 "d1d0b947-b8b1-4b5a-adb1-09ba80230db1" 00:14:37.643 ], 00:14:37.643 "product_name": "Malloc disk", 00:14:37.643 "block_size": 512, 00:14:37.643 "num_blocks": 65536, 00:14:37.643 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:37.643 "assigned_rate_limits": { 00:14:37.643 "rw_ios_per_sec": 0, 00:14:37.643 "rw_mbytes_per_sec": 0, 00:14:37.643 "r_mbytes_per_sec": 0, 00:14:37.643 "w_mbytes_per_sec": 0 00:14:37.643 }, 00:14:37.643 "claimed": true, 00:14:37.643 "claim_type": "exclusive_write", 00:14:37.643 "zoned": false, 00:14:37.643 "supported_io_types": { 00:14:37.643 "read": true, 00:14:37.643 "write": true, 00:14:37.643 "unmap": true, 00:14:37.643 "flush": true, 00:14:37.643 "reset": true, 00:14:37.643 "nvme_admin": false, 00:14:37.643 "nvme_io": false, 00:14:37.643 "nvme_io_md": false, 00:14:37.643 "write_zeroes": true, 00:14:37.643 "zcopy": true, 00:14:37.643 "get_zone_info": false, 00:14:37.643 "zone_management": false, 00:14:37.643 "zone_append": false, 00:14:37.643 "compare": false, 00:14:37.643 "compare_and_write": false, 00:14:37.643 "abort": true, 00:14:37.643 "seek_hole": false, 00:14:37.643 "seek_data": false, 00:14:37.643 "copy": true, 00:14:37.643 "nvme_iov_md": false 00:14:37.643 }, 00:14:37.643 "memory_domains": [ 00:14:37.643 { 00:14:37.643 "dma_device_id": "system", 00:14:37.643 "dma_device_type": 1 00:14:37.643 }, 00:14:37.643 { 00:14:37.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.643 "dma_device_type": 2 00:14:37.643 } 00:14:37.643 ], 00:14:37.643 "driver_specific": {} 00:14:37.643 } 00:14:37.643 ] 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.643 "name": "Existed_Raid", 00:14:37.643 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:37.643 "strip_size_kb": 64, 00:14:37.643 "state": "configuring", 00:14:37.643 "raid_level": "concat", 00:14:37.643 "superblock": true, 
00:14:37.643 "num_base_bdevs": 3, 00:14:37.643 "num_base_bdevs_discovered": 2, 00:14:37.643 "num_base_bdevs_operational": 3, 00:14:37.643 "base_bdevs_list": [ 00:14:37.643 { 00:14:37.643 "name": "BaseBdev1", 00:14:37.643 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:37.643 "is_configured": true, 00:14:37.643 "data_offset": 2048, 00:14:37.643 "data_size": 63488 00:14:37.643 }, 00:14:37.643 { 00:14:37.643 "name": null, 00:14:37.643 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:37.643 "is_configured": false, 00:14:37.643 "data_offset": 0, 00:14:37.643 "data_size": 63488 00:14:37.643 }, 00:14:37.643 { 00:14:37.643 "name": "BaseBdev3", 00:14:37.643 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:37.643 "is_configured": true, 00:14:37.643 "data_offset": 2048, 00:14:37.643 "data_size": 63488 00:14:37.643 } 00:14:37.643 ] 00:14:37.643 }' 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.643 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.210 [2024-11-06 09:07:14.674539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.210 "name": "Existed_Raid", 00:14:38.210 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:38.210 "strip_size_kb": 64, 00:14:38.210 "state": "configuring", 00:14:38.210 "raid_level": "concat", 00:14:38.210 "superblock": true, 00:14:38.210 "num_base_bdevs": 3, 00:14:38.210 "num_base_bdevs_discovered": 1, 00:14:38.210 "num_base_bdevs_operational": 3, 00:14:38.210 "base_bdevs_list": [ 00:14:38.210 { 00:14:38.210 "name": "BaseBdev1", 00:14:38.210 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:38.210 "is_configured": true, 00:14:38.210 "data_offset": 2048, 00:14:38.210 "data_size": 63488 00:14:38.210 }, 00:14:38.210 { 00:14:38.210 "name": null, 00:14:38.210 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:38.210 "is_configured": false, 00:14:38.210 "data_offset": 0, 00:14:38.210 "data_size": 63488 00:14:38.210 }, 00:14:38.210 { 00:14:38.210 "name": null, 00:14:38.210 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:38.210 "is_configured": false, 00:14:38.210 "data_offset": 0, 00:14:38.210 "data_size": 63488 00:14:38.210 } 00:14:38.210 ] 00:14:38.210 }' 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.210 09:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.780 [2024-11-06 09:07:15.274726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.780 09:07:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.780 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.038 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.038 "name": "Existed_Raid", 00:14:39.038 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:39.038 "strip_size_kb": 64, 00:14:39.038 "state": "configuring", 00:14:39.038 "raid_level": "concat", 00:14:39.038 "superblock": true, 00:14:39.038 "num_base_bdevs": 3, 00:14:39.038 "num_base_bdevs_discovered": 2, 00:14:39.038 "num_base_bdevs_operational": 3, 00:14:39.038 "base_bdevs_list": [ 00:14:39.038 { 00:14:39.038 "name": "BaseBdev1", 00:14:39.038 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:39.038 "is_configured": true, 00:14:39.038 "data_offset": 2048, 00:14:39.038 "data_size": 63488 00:14:39.038 }, 00:14:39.038 { 00:14:39.038 "name": null, 00:14:39.038 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:39.038 "is_configured": false, 00:14:39.038 "data_offset": 0, 00:14:39.038 "data_size": 63488 00:14:39.038 }, 00:14:39.038 { 00:14:39.038 "name": "BaseBdev3", 00:14:39.038 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:39.038 "is_configured": true, 00:14:39.038 "data_offset": 2048, 00:14:39.038 "data_size": 63488 00:14:39.038 } 00:14:39.038 ] 00:14:39.038 }' 00:14:39.038 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.038 
09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.605 09:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.605 [2024-11-06 09:07:15.914950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.605 09:07:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.605 "name": "Existed_Raid", 00:14:39.605 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:39.605 "strip_size_kb": 64, 00:14:39.605 "state": "configuring", 00:14:39.605 "raid_level": "concat", 00:14:39.605 "superblock": true, 00:14:39.605 "num_base_bdevs": 3, 00:14:39.605 "num_base_bdevs_discovered": 1, 00:14:39.605 "num_base_bdevs_operational": 3, 00:14:39.605 "base_bdevs_list": [ 00:14:39.605 { 00:14:39.605 "name": null, 00:14:39.605 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:39.605 "is_configured": false, 00:14:39.605 "data_offset": 0, 00:14:39.605 "data_size": 63488 00:14:39.605 }, 00:14:39.605 { 00:14:39.605 "name": null, 00:14:39.605 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:39.605 "is_configured": false, 
00:14:39.605 "data_offset": 0, 00:14:39.605 "data_size": 63488 00:14:39.605 }, 00:14:39.605 { 00:14:39.605 "name": "BaseBdev3", 00:14:39.605 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:39.605 "is_configured": true, 00:14:39.605 "data_offset": 2048, 00:14:39.605 "data_size": 63488 00:14:39.605 } 00:14:39.605 ] 00:14:39.605 }' 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.605 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.172 [2024-11-06 09:07:16.577932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.172 "name": "Existed_Raid", 00:14:40.172 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:40.172 "strip_size_kb": 64, 00:14:40.172 "state": "configuring", 00:14:40.172 "raid_level": "concat", 00:14:40.172 "superblock": true, 00:14:40.172 
"num_base_bdevs": 3, 00:14:40.172 "num_base_bdevs_discovered": 2, 00:14:40.172 "num_base_bdevs_operational": 3, 00:14:40.172 "base_bdevs_list": [ 00:14:40.172 { 00:14:40.172 "name": null, 00:14:40.172 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:40.172 "is_configured": false, 00:14:40.172 "data_offset": 0, 00:14:40.172 "data_size": 63488 00:14:40.172 }, 00:14:40.172 { 00:14:40.172 "name": "BaseBdev2", 00:14:40.172 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:40.172 "is_configured": true, 00:14:40.172 "data_offset": 2048, 00:14:40.172 "data_size": 63488 00:14:40.172 }, 00:14:40.172 { 00:14:40.172 "name": "BaseBdev3", 00:14:40.172 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:40.172 "is_configured": true, 00:14:40.172 "data_offset": 2048, 00:14:40.172 "data_size": 63488 00:14:40.172 } 00:14:40.172 ] 00:14:40.172 }' 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.172 09:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d1d0b947-b8b1-4b5a-adb1-09ba80230db1 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.739 [2024-11-06 09:07:17.264378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:40.739 NewBaseBdev 00:14:40.739 [2024-11-06 09:07:17.264876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:40.739 [2024-11-06 09:07:17.264907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:40.739 [2024-11-06 09:07:17.265211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:40.739 [2024-11-06 09:07:17.265396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:40.739 [2024-11-06 09:07:17.265412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:40.739 [2024-11-06 09:07:17.265572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=NewBaseBdev 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.739 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.998 [ 00:14:40.998 { 00:14:40.998 "name": "NewBaseBdev", 00:14:40.998 "aliases": [ 00:14:40.998 "d1d0b947-b8b1-4b5a-adb1-09ba80230db1" 00:14:40.998 ], 00:14:40.998 "product_name": "Malloc disk", 00:14:40.998 "block_size": 512, 00:14:40.998 "num_blocks": 65536, 00:14:40.998 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:40.998 "assigned_rate_limits": { 00:14:40.998 "rw_ios_per_sec": 0, 00:14:40.998 "rw_mbytes_per_sec": 0, 00:14:40.998 "r_mbytes_per_sec": 0, 00:14:40.998 "w_mbytes_per_sec": 0 00:14:40.998 }, 00:14:40.998 "claimed": true, 00:14:40.998 "claim_type": "exclusive_write", 00:14:40.998 "zoned": false, 00:14:40.998 "supported_io_types": { 00:14:40.998 "read": true, 00:14:40.998 
"write": true, 00:14:40.998 "unmap": true, 00:14:40.998 "flush": true, 00:14:40.998 "reset": true, 00:14:40.998 "nvme_admin": false, 00:14:40.998 "nvme_io": false, 00:14:40.998 "nvme_io_md": false, 00:14:40.998 "write_zeroes": true, 00:14:40.998 "zcopy": true, 00:14:40.998 "get_zone_info": false, 00:14:40.998 "zone_management": false, 00:14:40.998 "zone_append": false, 00:14:40.998 "compare": false, 00:14:40.998 "compare_and_write": false, 00:14:40.998 "abort": true, 00:14:40.998 "seek_hole": false, 00:14:40.998 "seek_data": false, 00:14:40.998 "copy": true, 00:14:40.998 "nvme_iov_md": false 00:14:40.998 }, 00:14:40.998 "memory_domains": [ 00:14:40.998 { 00:14:40.998 "dma_device_id": "system", 00:14:40.998 "dma_device_type": 1 00:14:40.998 }, 00:14:40.998 { 00:14:40.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.998 "dma_device_type": 2 00:14:40.998 } 00:14:40.998 ], 00:14:40.998 "driver_specific": {} 00:14:40.998 } 00:14:40.998 ] 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.998 "name": "Existed_Raid", 00:14:40.998 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:40.998 "strip_size_kb": 64, 00:14:40.998 "state": "online", 00:14:40.998 "raid_level": "concat", 00:14:40.998 "superblock": true, 00:14:40.998 "num_base_bdevs": 3, 00:14:40.998 "num_base_bdevs_discovered": 3, 00:14:40.998 "num_base_bdevs_operational": 3, 00:14:40.998 "base_bdevs_list": [ 00:14:40.998 { 00:14:40.998 "name": "NewBaseBdev", 00:14:40.998 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:40.998 "is_configured": true, 00:14:40.998 "data_offset": 2048, 00:14:40.998 "data_size": 63488 00:14:40.998 }, 00:14:40.998 { 00:14:40.998 "name": "BaseBdev2", 00:14:40.998 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:40.998 "is_configured": true, 00:14:40.998 "data_offset": 2048, 00:14:40.998 "data_size": 63488 00:14:40.998 }, 00:14:40.998 { 00:14:40.998 "name": "BaseBdev3", 00:14:40.998 "uuid": 
"f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:40.998 "is_configured": true, 00:14:40.998 "data_offset": 2048, 00:14:40.998 "data_size": 63488 00:14:40.998 } 00:14:40.998 ] 00:14:40.998 }' 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.998 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.572 [2024-11-06 09:07:17.841010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.572 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.572 "name": "Existed_Raid", 00:14:41.572 "aliases": [ 00:14:41.572 "aa2e87c1-3628-429b-ab96-ead090df424c" 
00:14:41.572 ], 00:14:41.572 "product_name": "Raid Volume", 00:14:41.572 "block_size": 512, 00:14:41.572 "num_blocks": 190464, 00:14:41.572 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:41.572 "assigned_rate_limits": { 00:14:41.572 "rw_ios_per_sec": 0, 00:14:41.572 "rw_mbytes_per_sec": 0, 00:14:41.572 "r_mbytes_per_sec": 0, 00:14:41.572 "w_mbytes_per_sec": 0 00:14:41.572 }, 00:14:41.572 "claimed": false, 00:14:41.572 "zoned": false, 00:14:41.572 "supported_io_types": { 00:14:41.572 "read": true, 00:14:41.572 "write": true, 00:14:41.572 "unmap": true, 00:14:41.572 "flush": true, 00:14:41.572 "reset": true, 00:14:41.572 "nvme_admin": false, 00:14:41.572 "nvme_io": false, 00:14:41.572 "nvme_io_md": false, 00:14:41.572 "write_zeroes": true, 00:14:41.572 "zcopy": false, 00:14:41.572 "get_zone_info": false, 00:14:41.572 "zone_management": false, 00:14:41.572 "zone_append": false, 00:14:41.572 "compare": false, 00:14:41.572 "compare_and_write": false, 00:14:41.572 "abort": false, 00:14:41.572 "seek_hole": false, 00:14:41.572 "seek_data": false, 00:14:41.572 "copy": false, 00:14:41.572 "nvme_iov_md": false 00:14:41.572 }, 00:14:41.572 "memory_domains": [ 00:14:41.572 { 00:14:41.572 "dma_device_id": "system", 00:14:41.573 "dma_device_type": 1 00:14:41.573 }, 00:14:41.573 { 00:14:41.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.573 "dma_device_type": 2 00:14:41.573 }, 00:14:41.573 { 00:14:41.573 "dma_device_id": "system", 00:14:41.573 "dma_device_type": 1 00:14:41.573 }, 00:14:41.573 { 00:14:41.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.573 "dma_device_type": 2 00:14:41.573 }, 00:14:41.573 { 00:14:41.573 "dma_device_id": "system", 00:14:41.573 "dma_device_type": 1 00:14:41.573 }, 00:14:41.573 { 00:14:41.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.573 "dma_device_type": 2 00:14:41.573 } 00:14:41.573 ], 00:14:41.573 "driver_specific": { 00:14:41.573 "raid": { 00:14:41.573 "uuid": "aa2e87c1-3628-429b-ab96-ead090df424c", 00:14:41.573 
"strip_size_kb": 64, 00:14:41.573 "state": "online", 00:14:41.573 "raid_level": "concat", 00:14:41.573 "superblock": true, 00:14:41.573 "num_base_bdevs": 3, 00:14:41.573 "num_base_bdevs_discovered": 3, 00:14:41.573 "num_base_bdevs_operational": 3, 00:14:41.573 "base_bdevs_list": [ 00:14:41.573 { 00:14:41.573 "name": "NewBaseBdev", 00:14:41.573 "uuid": "d1d0b947-b8b1-4b5a-adb1-09ba80230db1", 00:14:41.573 "is_configured": true, 00:14:41.573 "data_offset": 2048, 00:14:41.573 "data_size": 63488 00:14:41.573 }, 00:14:41.573 { 00:14:41.573 "name": "BaseBdev2", 00:14:41.573 "uuid": "549ef86f-5a0c-4d9d-88a9-557e68b5a417", 00:14:41.573 "is_configured": true, 00:14:41.573 "data_offset": 2048, 00:14:41.573 "data_size": 63488 00:14:41.573 }, 00:14:41.573 { 00:14:41.573 "name": "BaseBdev3", 00:14:41.573 "uuid": "f80c6ab8-37b7-40d6-b11b-6fd827ea377c", 00:14:41.573 "is_configured": true, 00:14:41.573 "data_offset": 2048, 00:14:41.573 "data_size": 63488 00:14:41.573 } 00:14:41.573 ] 00:14:41.573 } 00:14:41.573 } 00:14:41.573 }' 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:41.573 BaseBdev2 00:14:41.573 BaseBdev3' 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.573 09:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.573 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.832 [2024-11-06 09:07:18.160725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:41.832 [2024-11-06 09:07:18.160899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.832 [2024-11-06 09:07:18.161107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.832 [2024-11-06 09:07:18.161290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.832 [2024-11-06 09:07:18.161324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66195 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 66195 ']' 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@956 -- # kill -0 66195 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66195 00:14:41.832 killing process with pid 66195 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66195' 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 66195 00:14:41.832 [2024-11-06 09:07:18.198438] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.832 09:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 66195 00:14:42.091 [2024-11-06 09:07:18.474692] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.026 09:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:43.026 00:14:43.026 real 0m11.881s 00:14:43.026 user 0m19.790s 00:14:43.026 sys 0m1.562s 00:14:43.026 09:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:43.026 ************************************ 00:14:43.026 END TEST raid_state_function_test_sb 00:14:43.026 ************************************ 00:14:43.026 09:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.026 09:07:19 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:14:43.026 09:07:19 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:43.026 09:07:19 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:43.026 09:07:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.026 ************************************ 00:14:43.026 START TEST raid_superblock_test 00:14:43.026 ************************************ 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:14:43.026 09:07:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66832 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66832 00:14:43.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 66832 ']' 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:43.026 09:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.292 [2024-11-06 09:07:19.651801] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:14:43.292 [2024-11-06 09:07:19.652031] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66832 ] 00:14:43.555 [2024-11-06 09:07:19.837964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.555 [2024-11-06 09:07:19.967826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.814 [2024-11-06 09:07:20.167269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.814 [2024-11-06 09:07:20.167523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:44.381 
09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.381 malloc1 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.381 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.381 [2024-11-06 09:07:20.663557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:44.381 [2024-11-06 09:07:20.663769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.381 [2024-11-06 09:07:20.663943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.381 [2024-11-06 09:07:20.664071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.382 [2024-11-06 09:07:20.666923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.382 [2024-11-06 09:07:20.667082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:44.382 pt1 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.382 malloc2 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.382 [2024-11-06 09:07:20.716837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:44.382 [2024-11-06 09:07:20.716907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.382 [2024-11-06 09:07:20.716937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.382 [2024-11-06 09:07:20.716951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.382 [2024-11-06 09:07:20.719616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.382 pt2 00:14:44.382 [2024-11-06 09:07:20.719781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.382 malloc3 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.382 [2024-11-06 09:07:20.793719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:44.382 [2024-11-06 09:07:20.793918] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.382 [2024-11-06 09:07:20.794082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:44.382 [2024-11-06 09:07:20.794219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.382 [2024-11-06 09:07:20.796994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.382 [2024-11-06 09:07:20.797153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:44.382 pt3 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.382 [2024-11-06 09:07:20.805880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:44.382 [2024-11-06 09:07:20.808314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.382 [2024-11-06 09:07:20.808416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:44.382 [2024-11-06 09:07:20.808634] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:44.382 [2024-11-06 09:07:20.808657] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:44.382 [2024-11-06 09:07:20.808978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:14:44.382 [2024-11-06 09:07:20.809208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:44.382 [2024-11-06 09:07:20.809224] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:44.382 [2024-11-06 09:07:20.809390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.382 09:07:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.382 "name": "raid_bdev1", 00:14:44.382 "uuid": "9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:44.382 "strip_size_kb": 64, 00:14:44.382 "state": "online", 00:14:44.382 "raid_level": "concat", 00:14:44.382 "superblock": true, 00:14:44.382 "num_base_bdevs": 3, 00:14:44.382 "num_base_bdevs_discovered": 3, 00:14:44.382 "num_base_bdevs_operational": 3, 00:14:44.382 "base_bdevs_list": [ 00:14:44.382 { 00:14:44.382 "name": "pt1", 00:14:44.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.382 "is_configured": true, 00:14:44.382 "data_offset": 2048, 00:14:44.382 "data_size": 63488 00:14:44.382 }, 00:14:44.382 { 00:14:44.382 "name": "pt2", 00:14:44.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.382 "is_configured": true, 00:14:44.382 "data_offset": 2048, 00:14:44.382 "data_size": 63488 00:14:44.382 }, 00:14:44.382 { 00:14:44.382 "name": "pt3", 00:14:44.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.382 "is_configured": true, 00:14:44.382 "data_offset": 2048, 00:14:44.382 "data_size": 63488 00:14:44.382 } 00:14:44.382 ] 00:14:44.382 }' 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.382 09:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:44.950 [2024-11-06 09:07:21.330377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.950 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:44.950 "name": "raid_bdev1", 00:14:44.950 "aliases": [ 00:14:44.950 "9edea40d-40ed-4f96-a22a-cd63327a8023" 00:14:44.950 ], 00:14:44.950 "product_name": "Raid Volume", 00:14:44.950 "block_size": 512, 00:14:44.951 "num_blocks": 190464, 00:14:44.951 "uuid": "9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:44.951 "assigned_rate_limits": { 00:14:44.951 "rw_ios_per_sec": 0, 00:14:44.951 "rw_mbytes_per_sec": 0, 00:14:44.951 "r_mbytes_per_sec": 0, 00:14:44.951 "w_mbytes_per_sec": 0 00:14:44.951 }, 00:14:44.951 "claimed": false, 00:14:44.951 "zoned": false, 00:14:44.951 "supported_io_types": { 00:14:44.951 "read": true, 00:14:44.951 "write": true, 00:14:44.951 "unmap": true, 00:14:44.951 "flush": true, 00:14:44.951 "reset": true, 00:14:44.951 "nvme_admin": false, 00:14:44.951 "nvme_io": false, 00:14:44.951 "nvme_io_md": false, 00:14:44.951 "write_zeroes": true, 00:14:44.951 "zcopy": false, 00:14:44.951 "get_zone_info": false, 00:14:44.951 "zone_management": false, 00:14:44.951 "zone_append": false, 00:14:44.951 "compare": 
false, 00:14:44.951 "compare_and_write": false, 00:14:44.951 "abort": false, 00:14:44.951 "seek_hole": false, 00:14:44.951 "seek_data": false, 00:14:44.951 "copy": false, 00:14:44.951 "nvme_iov_md": false 00:14:44.951 }, 00:14:44.951 "memory_domains": [ 00:14:44.951 { 00:14:44.951 "dma_device_id": "system", 00:14:44.951 "dma_device_type": 1 00:14:44.951 }, 00:14:44.951 { 00:14:44.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.951 "dma_device_type": 2 00:14:44.951 }, 00:14:44.951 { 00:14:44.951 "dma_device_id": "system", 00:14:44.951 "dma_device_type": 1 00:14:44.951 }, 00:14:44.951 { 00:14:44.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.951 "dma_device_type": 2 00:14:44.951 }, 00:14:44.951 { 00:14:44.951 "dma_device_id": "system", 00:14:44.951 "dma_device_type": 1 00:14:44.951 }, 00:14:44.951 { 00:14:44.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.951 "dma_device_type": 2 00:14:44.951 } 00:14:44.951 ], 00:14:44.951 "driver_specific": { 00:14:44.951 "raid": { 00:14:44.951 "uuid": "9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:44.951 "strip_size_kb": 64, 00:14:44.951 "state": "online", 00:14:44.951 "raid_level": "concat", 00:14:44.951 "superblock": true, 00:14:44.951 "num_base_bdevs": 3, 00:14:44.951 "num_base_bdevs_discovered": 3, 00:14:44.951 "num_base_bdevs_operational": 3, 00:14:44.951 "base_bdevs_list": [ 00:14:44.951 { 00:14:44.951 "name": "pt1", 00:14:44.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:44.951 "is_configured": true, 00:14:44.951 "data_offset": 2048, 00:14:44.951 "data_size": 63488 00:14:44.951 }, 00:14:44.951 { 00:14:44.951 "name": "pt2", 00:14:44.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.951 "is_configured": true, 00:14:44.951 "data_offset": 2048, 00:14:44.951 "data_size": 63488 00:14:44.951 }, 00:14:44.951 { 00:14:44.951 "name": "pt3", 00:14:44.951 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.951 "is_configured": true, 00:14:44.951 "data_offset": 2048, 00:14:44.951 
"data_size": 63488 00:14:44.951 } 00:14:44.951 ] 00:14:44.951 } 00:14:44.951 } 00:14:44.951 }' 00:14:44.951 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:44.951 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:44.951 pt2 00:14:44.951 pt3' 00:14:44.951 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:45.210 [2024-11-06 09:07:21.674451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9edea40d-40ed-4f96-a22a-cd63327a8023 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9edea40d-40ed-4f96-a22a-cd63327a8023 ']' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.210 [2024-11-06 09:07:21.722108] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.210 [2024-11-06 09:07:21.722261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.210 [2024-11-06 09:07:21.722470] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.210 [2024-11-06 09:07:21.722564] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.210 [2024-11-06 09:07:21.722590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:45.210 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.469 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.470 [2024-11-06 09:07:21.866183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:45.470 [2024-11-06 09:07:21.868766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:45.470 
[2024-11-06 09:07:21.868967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:45.470 [2024-11-06 09:07:21.869047] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:45.470 [2024-11-06 09:07:21.869121] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:45.470 [2024-11-06 09:07:21.869154] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:45.470 [2024-11-06 09:07:21.869180] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.470 [2024-11-06 09:07:21.869193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:45.470 request: 00:14:45.470 { 00:14:45.470 "name": "raid_bdev1", 00:14:45.470 "raid_level": "concat", 00:14:45.470 "base_bdevs": [ 00:14:45.470 "malloc1", 00:14:45.470 "malloc2", 00:14:45.470 "malloc3" 00:14:45.470 ], 00:14:45.470 "strip_size_kb": 64, 00:14:45.470 "superblock": false, 00:14:45.470 "method": "bdev_raid_create", 00:14:45.470 "req_id": 1 00:14:45.470 } 00:14:45.470 Got JSON-RPC error response 00:14:45.470 response: 00:14:45.470 { 00:14:45.470 "code": -17, 00:14:45.470 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:45.470 } 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.470 09:07:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.470 [2024-11-06 09:07:21.930184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:45.470 [2024-11-06 09:07:21.930422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.470 [2024-11-06 09:07:21.930511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:45.470 [2024-11-06 09:07:21.930668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.470 [2024-11-06 09:07:21.933681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.470 [2024-11-06 09:07:21.933837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:45.470 [2024-11-06 09:07:21.934091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:45.470 [2024-11-06 09:07:21.934257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:14:45.470 pt1 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.470 09:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.470 "name": "raid_bdev1", 00:14:45.470 "uuid": 
"9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:45.470 "strip_size_kb": 64, 00:14:45.470 "state": "configuring", 00:14:45.470 "raid_level": "concat", 00:14:45.470 "superblock": true, 00:14:45.470 "num_base_bdevs": 3, 00:14:45.470 "num_base_bdevs_discovered": 1, 00:14:45.470 "num_base_bdevs_operational": 3, 00:14:45.470 "base_bdevs_list": [ 00:14:45.470 { 00:14:45.470 "name": "pt1", 00:14:45.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:45.470 "is_configured": true, 00:14:45.470 "data_offset": 2048, 00:14:45.470 "data_size": 63488 00:14:45.470 }, 00:14:45.470 { 00:14:45.470 "name": null, 00:14:45.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:45.470 "is_configured": false, 00:14:45.470 "data_offset": 2048, 00:14:45.470 "data_size": 63488 00:14:45.470 }, 00:14:45.470 { 00:14:45.470 "name": null, 00:14:45.470 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:45.470 "is_configured": false, 00:14:45.470 "data_offset": 2048, 00:14:45.470 "data_size": 63488 00:14:45.470 } 00:14:45.470 ] 00:14:45.470 }' 00:14:45.470 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.470 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.037 [2024-11-06 09:07:22.458353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:46.037 [2024-11-06 09:07:22.458587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.037 [2024-11-06 09:07:22.458633] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:46.037 [2024-11-06 09:07:22.458649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.037 [2024-11-06 09:07:22.459228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.037 [2024-11-06 09:07:22.459259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:46.037 [2024-11-06 09:07:22.459381] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:46.037 [2024-11-06 09:07:22.459412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.037 pt2 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.037 [2024-11-06 09:07:22.466327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.037 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.037 "name": "raid_bdev1", 00:14:46.037 "uuid": "9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:46.038 "strip_size_kb": 64, 00:14:46.038 "state": "configuring", 00:14:46.038 "raid_level": "concat", 00:14:46.038 "superblock": true, 00:14:46.038 "num_base_bdevs": 3, 00:14:46.038 "num_base_bdevs_discovered": 1, 00:14:46.038 "num_base_bdevs_operational": 3, 00:14:46.038 "base_bdevs_list": [ 00:14:46.038 { 00:14:46.038 "name": "pt1", 00:14:46.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:46.038 "is_configured": true, 00:14:46.038 "data_offset": 2048, 00:14:46.038 "data_size": 63488 00:14:46.038 }, 00:14:46.038 { 00:14:46.038 "name": null, 00:14:46.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.038 "is_configured": false, 00:14:46.038 "data_offset": 0, 00:14:46.038 "data_size": 63488 00:14:46.038 }, 00:14:46.038 { 00:14:46.038 "name": null, 00:14:46.038 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:46.038 "is_configured": false, 00:14:46.038 "data_offset": 2048, 00:14:46.038 "data_size": 63488 00:14:46.038 } 00:14:46.038 ] 00:14:46.038 }' 00:14:46.038 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.038 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.605 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:46.605 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:46.605 09:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:46.605 09:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.605 [2024-11-06 09:07:23.006520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:46.605 [2024-11-06 09:07:23.006742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.605 [2024-11-06 09:07:23.006900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:46.605 [2024-11-06 09:07:23.007027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.605 [2024-11-06 09:07:23.007666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.605 [2024-11-06 09:07:23.007722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:46.605 [2024-11-06 09:07:23.007823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:46.605 [2024-11-06 09:07:23.008037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.605 pt2 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.605 [2024-11-06 09:07:23.018494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:46.605 [2024-11-06 09:07:23.018687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.605 [2024-11-06 09:07:23.018753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:46.605 [2024-11-06 09:07:23.018984] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.605 [2024-11-06 09:07:23.019491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.605 [2024-11-06 09:07:23.019707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:46.605 [2024-11-06 09:07:23.019898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:46.605 [2024-11-06 09:07:23.019941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:46.605 [2024-11-06 09:07:23.020085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:46.605 [2024-11-06 09:07:23.020106] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:46.605 [2024-11-06 09:07:23.020580] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:46.605 [2024-11-06 
09:07:23.020760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:46.605 [2024-11-06 09:07:23.020790] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:46.605 [2024-11-06 09:07:23.020984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.605 pt3 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.605 "name": "raid_bdev1", 00:14:46.605 "uuid": "9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:46.605 "strip_size_kb": 64, 00:14:46.605 "state": "online", 00:14:46.605 "raid_level": "concat", 00:14:46.605 "superblock": true, 00:14:46.605 "num_base_bdevs": 3, 00:14:46.605 "num_base_bdevs_discovered": 3, 00:14:46.605 "num_base_bdevs_operational": 3, 00:14:46.605 "base_bdevs_list": [ 00:14:46.605 { 00:14:46.605 "name": "pt1", 00:14:46.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:46.605 "is_configured": true, 00:14:46.605 "data_offset": 2048, 00:14:46.605 "data_size": 63488 00:14:46.605 }, 00:14:46.605 { 00:14:46.605 "name": "pt2", 00:14:46.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.605 "is_configured": true, 00:14:46.605 "data_offset": 2048, 00:14:46.605 "data_size": 63488 00:14:46.605 }, 00:14:46.605 { 00:14:46.605 "name": "pt3", 00:14:46.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:46.605 "is_configured": true, 00:14:46.605 "data_offset": 2048, 00:14:46.605 "data_size": 63488 00:14:46.605 } 00:14:46.605 ] 00:14:46.605 }' 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.605 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:47.175 [2024-11-06 09:07:23.559112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:47.175 "name": "raid_bdev1", 00:14:47.175 "aliases": [ 00:14:47.175 "9edea40d-40ed-4f96-a22a-cd63327a8023" 00:14:47.175 ], 00:14:47.175 "product_name": "Raid Volume", 00:14:47.175 "block_size": 512, 00:14:47.175 "num_blocks": 190464, 00:14:47.175 "uuid": "9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:47.175 "assigned_rate_limits": { 00:14:47.175 "rw_ios_per_sec": 0, 00:14:47.175 "rw_mbytes_per_sec": 0, 00:14:47.175 "r_mbytes_per_sec": 0, 00:14:47.175 "w_mbytes_per_sec": 0 00:14:47.175 }, 00:14:47.175 "claimed": false, 00:14:47.175 "zoned": false, 00:14:47.175 "supported_io_types": { 00:14:47.175 "read": true, 00:14:47.175 "write": true, 00:14:47.175 "unmap": true, 00:14:47.175 "flush": true, 00:14:47.175 "reset": true, 00:14:47.175 "nvme_admin": false, 00:14:47.175 "nvme_io": false, 00:14:47.175 "nvme_io_md": false, 
00:14:47.175 "write_zeroes": true, 00:14:47.175 "zcopy": false, 00:14:47.175 "get_zone_info": false, 00:14:47.175 "zone_management": false, 00:14:47.175 "zone_append": false, 00:14:47.175 "compare": false, 00:14:47.175 "compare_and_write": false, 00:14:47.175 "abort": false, 00:14:47.175 "seek_hole": false, 00:14:47.175 "seek_data": false, 00:14:47.175 "copy": false, 00:14:47.175 "nvme_iov_md": false 00:14:47.175 }, 00:14:47.175 "memory_domains": [ 00:14:47.175 { 00:14:47.175 "dma_device_id": "system", 00:14:47.175 "dma_device_type": 1 00:14:47.175 }, 00:14:47.175 { 00:14:47.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.175 "dma_device_type": 2 00:14:47.175 }, 00:14:47.175 { 00:14:47.175 "dma_device_id": "system", 00:14:47.175 "dma_device_type": 1 00:14:47.175 }, 00:14:47.175 { 00:14:47.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.175 "dma_device_type": 2 00:14:47.175 }, 00:14:47.175 { 00:14:47.175 "dma_device_id": "system", 00:14:47.175 "dma_device_type": 1 00:14:47.175 }, 00:14:47.175 { 00:14:47.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.175 "dma_device_type": 2 00:14:47.175 } 00:14:47.175 ], 00:14:47.175 "driver_specific": { 00:14:47.175 "raid": { 00:14:47.175 "uuid": "9edea40d-40ed-4f96-a22a-cd63327a8023", 00:14:47.175 "strip_size_kb": 64, 00:14:47.175 "state": "online", 00:14:47.175 "raid_level": "concat", 00:14:47.175 "superblock": true, 00:14:47.175 "num_base_bdevs": 3, 00:14:47.175 "num_base_bdevs_discovered": 3, 00:14:47.175 "num_base_bdevs_operational": 3, 00:14:47.175 "base_bdevs_list": [ 00:14:47.175 { 00:14:47.175 "name": "pt1", 00:14:47.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:47.175 "is_configured": true, 00:14:47.175 "data_offset": 2048, 00:14:47.175 "data_size": 63488 00:14:47.175 }, 00:14:47.175 { 00:14:47.175 "name": "pt2", 00:14:47.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.175 "is_configured": true, 00:14:47.175 "data_offset": 2048, 00:14:47.175 "data_size": 63488 00:14:47.175 }, 
00:14:47.175 { 00:14:47.175 "name": "pt3", 00:14:47.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:47.175 "is_configured": true, 00:14:47.175 "data_offset": 2048, 00:14:47.175 "data_size": 63488 00:14:47.175 } 00:14:47.175 ] 00:14:47.175 } 00:14:47.175 } 00:14:47.175 }' 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:47.175 pt2 00:14:47.175 pt3' 00:14:47.175 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:47.434 09:07:23 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:47.434 
[2024-11-06 09:07:23.883179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9edea40d-40ed-4f96-a22a-cd63327a8023 '!=' 9edea40d-40ed-4f96-a22a-cd63327a8023 ']' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66832 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 66832 ']' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 66832 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66832 00:14:47.434 killing process with pid 66832 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:47.434 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66832' 00:14:47.435 09:07:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 66832 00:14:47.435 [2024-11-06 09:07:23.966070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.435 09:07:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@976 -- # wait 66832 00:14:47.435 [2024-11-06 09:07:23.966189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.435 [2024-11-06 09:07:23.966267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.435 [2024-11-06 09:07:23.966286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:48.002 [2024-11-06 09:07:24.233415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:48.938 09:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:48.938 00:14:48.938 real 0m5.743s 00:14:48.938 user 0m8.635s 00:14:48.938 sys 0m0.838s 00:14:48.938 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:48.938 ************************************ 00:14:48.938 END TEST raid_superblock_test 00:14:48.938 ************************************ 00:14:48.938 09:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.938 09:07:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:14:48.938 09:07:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:48.938 09:07:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:48.938 09:07:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:48.938 ************************************ 00:14:48.938 START TEST raid_read_error_test 00:14:48.938 ************************************ 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:48.938 09:07:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cRwRae5lqK 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67085 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:48.938 09:07:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67085 00:14:48.939 09:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 67085 ']' 00:14:48.939 09:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.939 09:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:48.939 09:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.939 09:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:48.939 09:07:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.939 [2024-11-06 09:07:25.465639] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:14:48.939 [2024-11-06 09:07:25.466053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67085 ] 00:14:49.198 [2024-11-06 09:07:25.658433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.457 [2024-11-06 09:07:25.820437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.716 [2024-11-06 09:07:26.034995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.716 [2024-11-06 09:07:26.035293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.975 BaseBdev1_malloc 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.975 true 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.975 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.975 [2024-11-06 09:07:26.503537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:49.975 [2024-11-06 09:07:26.503735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:49.975 [2024-11-06 09:07:26.503831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:49.975 [2024-11-06 09:07:26.503967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:49.975 [2024-11-06 09:07:26.506739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:49.975 [2024-11-06 09:07:26.506790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:50.234 BaseBdev1 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.234 BaseBdev2_malloc 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:50.234 09:07:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 true 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 [2024-11-06 09:07:26.563837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:50.235 [2024-11-06 09:07:26.564068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.235 [2024-11-06 09:07:26.564106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:50.235 [2024-11-06 09:07:26.564125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.235 [2024-11-06 09:07:26.566994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.235 [2024-11-06 09:07:26.567057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:50.235 BaseBdev2 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 BaseBdev3_malloc 00:14:50.235 09:07:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 true 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 [2024-11-06 09:07:26.637475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:50.235 [2024-11-06 09:07:26.637543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.235 [2024-11-06 09:07:26.637570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:50.235 [2024-11-06 09:07:26.637587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.235 [2024-11-06 09:07:26.640338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.235 [2024-11-06 09:07:26.640387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:50.235 BaseBdev3 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 [2024-11-06 09:07:26.645562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.235 [2024-11-06 09:07:26.648101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.235 [2024-11-06 09:07:26.648339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.235 [2024-11-06 09:07:26.648610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:50.235 [2024-11-06 09:07:26.648629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:50.235 [2024-11-06 09:07:26.648966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:50.235 [2024-11-06 09:07:26.649169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:50.235 [2024-11-06 09:07:26.649192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:50.235 [2024-11-06 09:07:26.649369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.235 09:07:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.235 "name": "raid_bdev1", 00:14:50.235 "uuid": "fa107f8e-495f-4351-aa82-309f66b04e8e", 00:14:50.235 "strip_size_kb": 64, 00:14:50.235 "state": "online", 00:14:50.235 "raid_level": "concat", 00:14:50.235 "superblock": true, 00:14:50.235 "num_base_bdevs": 3, 00:14:50.235 "num_base_bdevs_discovered": 3, 00:14:50.235 "num_base_bdevs_operational": 3, 00:14:50.235 "base_bdevs_list": [ 00:14:50.235 { 00:14:50.235 "name": "BaseBdev1", 00:14:50.235 "uuid": "4261a418-5079-5bc8-897d-1ba8db5e9e03", 00:14:50.235 "is_configured": true, 00:14:50.235 "data_offset": 2048, 00:14:50.235 "data_size": 63488 00:14:50.235 }, 00:14:50.235 { 00:14:50.235 "name": "BaseBdev2", 00:14:50.235 "uuid": "2030542c-77e0-5989-b734-e3a3feaafe1c", 00:14:50.235 "is_configured": true, 00:14:50.235 "data_offset": 2048, 00:14:50.235 "data_size": 63488 
00:14:50.235 }, 00:14:50.235 { 00:14:50.235 "name": "BaseBdev3", 00:14:50.235 "uuid": "8fdccea5-c881-5a5f-81c1-3b9655160c2e", 00:14:50.235 "is_configured": true, 00:14:50.235 "data_offset": 2048, 00:14:50.235 "data_size": 63488 00:14:50.235 } 00:14:50.235 ] 00:14:50.235 }' 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.235 09:07:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.804 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:50.804 09:07:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:50.804 [2024-11-06 09:07:27.299247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.739 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.739 "name": "raid_bdev1", 00:14:51.739 "uuid": "fa107f8e-495f-4351-aa82-309f66b04e8e", 00:14:51.739 "strip_size_kb": 64, 00:14:51.739 "state": "online", 00:14:51.739 "raid_level": "concat", 00:14:51.739 "superblock": true, 00:14:51.739 "num_base_bdevs": 3, 00:14:51.739 "num_base_bdevs_discovered": 3, 00:14:51.739 "num_base_bdevs_operational": 3, 00:14:51.739 "base_bdevs_list": [ 00:14:51.739 { 00:14:51.739 "name": "BaseBdev1", 00:14:51.739 "uuid": "4261a418-5079-5bc8-897d-1ba8db5e9e03", 00:14:51.739 "is_configured": true, 00:14:51.740 "data_offset": 2048, 00:14:51.740 "data_size": 63488 
00:14:51.740 }, 00:14:51.740 { 00:14:51.740 "name": "BaseBdev2", 00:14:51.740 "uuid": "2030542c-77e0-5989-b734-e3a3feaafe1c", 00:14:51.740 "is_configured": true, 00:14:51.740 "data_offset": 2048, 00:14:51.740 "data_size": 63488 00:14:51.740 }, 00:14:51.740 { 00:14:51.740 "name": "BaseBdev3", 00:14:51.740 "uuid": "8fdccea5-c881-5a5f-81c1-3b9655160c2e", 00:14:51.740 "is_configured": true, 00:14:51.740 "data_offset": 2048, 00:14:51.740 "data_size": 63488 00:14:51.740 } 00:14:51.740 ] 00:14:51.740 }' 00:14:51.740 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.740 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.307 [2024-11-06 09:07:28.702361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.307 [2024-11-06 09:07:28.702558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.307 [2024-11-06 09:07:28.706243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.307 [2024-11-06 09:07:28.706547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.307 [2024-11-06 09:07:28.706740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.307 [2024-11-06 09:07:28.706910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:52.307 { 00:14:52.307 "results": [ 00:14:52.307 { 00:14:52.307 "job": "raid_bdev1", 00:14:52.307 "core_mask": "0x1", 00:14:52.307 "workload": "randrw", 00:14:52.307 "percentage": 50, 
00:14:52.307 "status": "finished", 00:14:52.307 "queue_depth": 1, 00:14:52.307 "io_size": 131072, 00:14:52.307 "runtime": 1.400632, 00:14:52.307 "iops": 10772.993905608326, 00:14:52.307 "mibps": 1346.6242382010407, 00:14:52.307 "io_failed": 1, 00:14:52.307 "io_timeout": 0, 00:14:52.307 "avg_latency_us": 129.5817974576782, 00:14:52.307 "min_latency_us": 38.4, 00:14:52.307 "max_latency_us": 1854.370909090909 00:14:52.307 } 00:14:52.307 ], 00:14:52.307 "core_count": 1 00:14:52.307 } 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67085 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 67085 ']' 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 67085 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67085 00:14:52.307 killing process with pid 67085 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67085' 00:14:52.307 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 67085 00:14:52.308 [2024-11-06 09:07:28.748025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:52.308 09:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 67085 00:14:52.566 [2024-11-06 09:07:28.960930] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cRwRae5lqK 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:53.945 00:14:53.945 real 0m4.749s 00:14:53.945 user 0m5.881s 00:14:53.945 sys 0m0.584s 00:14:53.945 ************************************ 00:14:53.945 END TEST raid_read_error_test 00:14:53.945 ************************************ 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:53.945 09:07:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.945 09:07:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:14:53.945 09:07:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:53.945 09:07:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:53.945 09:07:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.945 ************************************ 00:14:53.945 START TEST raid_write_error_test 00:14:53.945 ************************************ 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:14:53.945 09:07:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:53.945 09:07:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.drcwiui3iG 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67235 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67235 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 67235 ']' 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:53.945 09:07:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.945 [2024-11-06 09:07:30.260851] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:14:53.945 [2024-11-06 09:07:30.261301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67235 ] 00:14:53.945 [2024-11-06 09:07:30.454871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.204 [2024-11-06 09:07:30.617065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.462 [2024-11-06 09:07:30.828418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.462 [2024-11-06 09:07:30.828518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 BaseBdev1_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 true 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 [2024-11-06 09:07:31.348068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:55.030 [2024-11-06 09:07:31.348164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.030 [2024-11-06 09:07:31.348191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:55.030 [2024-11-06 09:07:31.348208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.030 [2024-11-06 09:07:31.350966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.030 [2024-11-06 09:07:31.351028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.030 BaseBdev1 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:55.030 BaseBdev2_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 true 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 [2024-11-06 09:07:31.403095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:55.030 [2024-11-06 09:07:31.403352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.030 [2024-11-06 09:07:31.403388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:55.030 [2024-11-06 09:07:31.403407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.030 [2024-11-06 09:07:31.406268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.030 [2024-11-06 09:07:31.406331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:55.030 BaseBdev2 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:55.030 09:07:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 BaseBdev3_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 true 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 [2024-11-06 09:07:31.469651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:55.030 [2024-11-06 09:07:31.469736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.030 [2024-11-06 09:07:31.469763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:55.030 [2024-11-06 09:07:31.469781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.030 [2024-11-06 09:07:31.472763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.030 [2024-11-06 09:07:31.472837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:14:55.030 BaseBdev3 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.030 [2024-11-06 09:07:31.481758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:55.030 [2024-11-06 09:07:31.484356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.030 [2024-11-06 09:07:31.484458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.030 [2024-11-06 09:07:31.484737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:55.030 [2024-11-06 09:07:31.484755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:55.030 [2024-11-06 09:07:31.485098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:14:55.030 [2024-11-06 09:07:31.485361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:55.030 [2024-11-06 09:07:31.485504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:14:55.030 [2024-11-06 09:07:31.485764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.030 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.031 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.031 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.031 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.031 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.031 "name": "raid_bdev1", 00:14:55.031 "uuid": "3f908645-e4b2-4cfb-819e-abbc34e27235", 00:14:55.031 "strip_size_kb": 64, 00:14:55.031 "state": "online", 00:14:55.031 "raid_level": "concat", 00:14:55.031 "superblock": true, 00:14:55.031 "num_base_bdevs": 3, 00:14:55.031 "num_base_bdevs_discovered": 3, 00:14:55.031 "num_base_bdevs_operational": 3, 00:14:55.031 "base_bdevs_list": [ 00:14:55.031 { 00:14:55.031 
"name": "BaseBdev1", 00:14:55.031 "uuid": "648961e8-3338-51e0-b7db-b43e4f5adcf8", 00:14:55.031 "is_configured": true, 00:14:55.031 "data_offset": 2048, 00:14:55.031 "data_size": 63488 00:14:55.031 }, 00:14:55.031 { 00:14:55.031 "name": "BaseBdev2", 00:14:55.031 "uuid": "6b6e0d97-a4ee-5e45-b571-ee76b9f17b20", 00:14:55.031 "is_configured": true, 00:14:55.031 "data_offset": 2048, 00:14:55.031 "data_size": 63488 00:14:55.031 }, 00:14:55.031 { 00:14:55.031 "name": "BaseBdev3", 00:14:55.031 "uuid": "63153139-558d-5420-a7ab-140a82685139", 00:14:55.031 "is_configured": true, 00:14:55.031 "data_offset": 2048, 00:14:55.031 "data_size": 63488 00:14:55.031 } 00:14:55.031 ] 00:14:55.031 }' 00:14:55.031 09:07:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.031 09:07:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.599 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:55.599 09:07:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:55.857 [2024-11-06 09:07:32.163384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.795 "name": "raid_bdev1", 00:14:56.795 "uuid": "3f908645-e4b2-4cfb-819e-abbc34e27235", 00:14:56.795 "strip_size_kb": 64, 00:14:56.795 "state": "online", 
00:14:56.795 "raid_level": "concat", 00:14:56.795 "superblock": true, 00:14:56.795 "num_base_bdevs": 3, 00:14:56.795 "num_base_bdevs_discovered": 3, 00:14:56.795 "num_base_bdevs_operational": 3, 00:14:56.795 "base_bdevs_list": [ 00:14:56.795 { 00:14:56.795 "name": "BaseBdev1", 00:14:56.795 "uuid": "648961e8-3338-51e0-b7db-b43e4f5adcf8", 00:14:56.795 "is_configured": true, 00:14:56.795 "data_offset": 2048, 00:14:56.795 "data_size": 63488 00:14:56.795 }, 00:14:56.795 { 00:14:56.795 "name": "BaseBdev2", 00:14:56.795 "uuid": "6b6e0d97-a4ee-5e45-b571-ee76b9f17b20", 00:14:56.795 "is_configured": true, 00:14:56.795 "data_offset": 2048, 00:14:56.795 "data_size": 63488 00:14:56.795 }, 00:14:56.795 { 00:14:56.795 "name": "BaseBdev3", 00:14:56.795 "uuid": "63153139-558d-5420-a7ab-140a82685139", 00:14:56.795 "is_configured": true, 00:14:56.795 "data_offset": 2048, 00:14:56.795 "data_size": 63488 00:14:56.795 } 00:14:56.795 ] 00:14:56.795 }' 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.795 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.363 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:57.363 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.363 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.363 [2024-11-06 09:07:33.613556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:57.363 [2024-11-06 09:07:33.613739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.363 [2024-11-06 09:07:33.617348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.363 [2024-11-06 09:07:33.617558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.363 [2024-11-06 09:07:33.617662] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.363 [2024-11-06 09:07:33.617882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:57.363 { 00:14:57.363 "results": [ 00:14:57.363 { 00:14:57.363 "job": "raid_bdev1", 00:14:57.363 "core_mask": "0x1", 00:14:57.363 "workload": "randrw", 00:14:57.363 "percentage": 50, 00:14:57.363 "status": "finished", 00:14:57.363 "queue_depth": 1, 00:14:57.363 "io_size": 131072, 00:14:57.363 "runtime": 1.448045, 00:14:57.363 "iops": 10539.037115559255, 00:14:57.363 "mibps": 1317.3796394449068, 00:14:57.363 "io_failed": 1, 00:14:57.363 "io_timeout": 0, 00:14:57.363 "avg_latency_us": 132.45580776974302, 00:14:57.363 "min_latency_us": 37.00363636363636, 00:14:57.364 "max_latency_us": 1884.16 00:14:57.364 } 00:14:57.364 ], 00:14:57.364 "core_count": 1 00:14:57.364 } 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67235 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 67235 ']' 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 67235 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67235 00:14:57.364 killing process with pid 67235 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:57.364 09:07:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67235' 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 67235 00:14:57.364 [2024-11-06 09:07:33.649729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.364 09:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 67235 00:14:57.364 [2024-11-06 09:07:33.857196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.drcwiui3iG 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:14:58.744 00:14:58.744 real 0m4.828s 00:14:58.744 user 0m6.070s 00:14:58.744 sys 0m0.604s 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:58.744 ************************************ 00:14:58.744 END TEST raid_write_error_test 00:14:58.744 ************************************ 00:14:58.744 09:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.744 09:07:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:58.744 09:07:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:14:58.744 09:07:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:58.744 09:07:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:58.744 09:07:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.744 ************************************ 00:14:58.744 START TEST raid_state_function_test 00:14:58.744 ************************************ 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:58.744 Process raid pid: 67380 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:58.744 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67380 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67380' 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67380 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:58.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 67380 ']' 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:58.745 09:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.745 [2024-11-06 09:07:35.163366] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:14:58.745 [2024-11-06 09:07:35.164518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.004 [2024-11-06 09:07:35.344452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.004 [2024-11-06 09:07:35.475506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.262 [2024-11-06 09:07:35.685289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.262 [2024-11-06 09:07:35.685329] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.829 [2024-11-06 09:07:36.106947] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.829 [2024-11-06 09:07:36.107007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.829 [2024-11-06 09:07:36.107024] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.829 [2024-11-06 09:07:36.107041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.829 [2024-11-06 09:07:36.107052] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:59.829 [2024-11-06 09:07:36.107067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.829 09:07:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.829 "name": "Existed_Raid", 00:14:59.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.829 "strip_size_kb": 0, 00:14:59.829 "state": "configuring", 00:14:59.829 "raid_level": "raid1", 00:14:59.829 "superblock": false, 00:14:59.829 "num_base_bdevs": 3, 00:14:59.829 "num_base_bdevs_discovered": 0, 00:14:59.829 "num_base_bdevs_operational": 3, 00:14:59.829 "base_bdevs_list": [ 00:14:59.829 { 00:14:59.829 "name": "BaseBdev1", 00:14:59.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.829 "is_configured": false, 00:14:59.829 "data_offset": 0, 00:14:59.829 "data_size": 0 00:14:59.829 }, 00:14:59.829 { 00:14:59.829 "name": "BaseBdev2", 00:14:59.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.829 "is_configured": false, 00:14:59.829 "data_offset": 0, 00:14:59.829 "data_size": 0 00:14:59.829 }, 00:14:59.829 { 00:14:59.829 "name": "BaseBdev3", 00:14:59.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.829 "is_configured": false, 00:14:59.829 "data_offset": 0, 
00:14:59.829 "data_size": 0 00:14:59.829 } 00:14:59.829 ] 00:14:59.829 }' 00:14:59.829 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.830 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 [2024-11-06 09:07:36.643054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.400 [2024-11-06 09:07:36.643102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 [2024-11-06 09:07:36.651023] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.400 [2024-11-06 09:07:36.651088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.400 [2024-11-06 09:07:36.651103] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.400 [2024-11-06 09:07:36.651119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.400 [2024-11-06 09:07:36.651129] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:00.400 [2024-11-06 09:07:36.651143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 [2024-11-06 09:07:36.697042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.400 BaseBdev1 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 [ 00:15:00.400 { 00:15:00.400 "name": "BaseBdev1", 00:15:00.400 "aliases": [ 00:15:00.400 "9b69c3bc-6200-4b81-9aba-75755e71af42" 00:15:00.400 ], 00:15:00.400 "product_name": "Malloc disk", 00:15:00.400 "block_size": 512, 00:15:00.400 "num_blocks": 65536, 00:15:00.400 "uuid": "9b69c3bc-6200-4b81-9aba-75755e71af42", 00:15:00.400 "assigned_rate_limits": { 00:15:00.400 "rw_ios_per_sec": 0, 00:15:00.400 "rw_mbytes_per_sec": 0, 00:15:00.400 "r_mbytes_per_sec": 0, 00:15:00.400 "w_mbytes_per_sec": 0 00:15:00.400 }, 00:15:00.400 "claimed": true, 00:15:00.400 "claim_type": "exclusive_write", 00:15:00.400 "zoned": false, 00:15:00.400 "supported_io_types": { 00:15:00.400 "read": true, 00:15:00.400 "write": true, 00:15:00.400 "unmap": true, 00:15:00.400 "flush": true, 00:15:00.400 "reset": true, 00:15:00.400 "nvme_admin": false, 00:15:00.400 "nvme_io": false, 00:15:00.400 "nvme_io_md": false, 00:15:00.400 "write_zeroes": true, 00:15:00.400 "zcopy": true, 00:15:00.400 "get_zone_info": false, 00:15:00.400 "zone_management": false, 00:15:00.400 "zone_append": false, 00:15:00.400 "compare": false, 00:15:00.400 "compare_and_write": false, 00:15:00.400 "abort": true, 00:15:00.400 "seek_hole": false, 00:15:00.400 "seek_data": false, 00:15:00.400 "copy": true, 00:15:00.400 "nvme_iov_md": false 00:15:00.400 }, 00:15:00.400 "memory_domains": [ 00:15:00.400 { 00:15:00.400 "dma_device_id": "system", 00:15:00.400 "dma_device_type": 1 00:15:00.400 }, 00:15:00.400 { 00:15:00.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.400 "dma_device_type": 2 00:15:00.400 } 00:15:00.400 ], 00:15:00.400 "driver_specific": {} 00:15:00.400 } 
00:15:00.400 ] 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.400 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.400 09:07:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.400 "name": "Existed_Raid", 00:15:00.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.400 "strip_size_kb": 0, 00:15:00.400 "state": "configuring", 00:15:00.400 "raid_level": "raid1", 00:15:00.401 "superblock": false, 00:15:00.401 "num_base_bdevs": 3, 00:15:00.401 "num_base_bdevs_discovered": 1, 00:15:00.401 "num_base_bdevs_operational": 3, 00:15:00.401 "base_bdevs_list": [ 00:15:00.401 { 00:15:00.401 "name": "BaseBdev1", 00:15:00.401 "uuid": "9b69c3bc-6200-4b81-9aba-75755e71af42", 00:15:00.401 "is_configured": true, 00:15:00.401 "data_offset": 0, 00:15:00.401 "data_size": 65536 00:15:00.401 }, 00:15:00.401 { 00:15:00.401 "name": "BaseBdev2", 00:15:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.401 "is_configured": false, 00:15:00.401 "data_offset": 0, 00:15:00.401 "data_size": 0 00:15:00.401 }, 00:15:00.401 { 00:15:00.401 "name": "BaseBdev3", 00:15:00.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.401 "is_configured": false, 00:15:00.401 "data_offset": 0, 00:15:00.401 "data_size": 0 00:15:00.401 } 00:15:00.401 ] 00:15:00.401 }' 00:15:00.401 09:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.401 09:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.991 [2024-11-06 09:07:37.253262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.991 [2024-11-06 09:07:37.253342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 
00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.991 [2024-11-06 09:07:37.261257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.991 [2024-11-06 09:07:37.263738] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.991 [2024-11-06 09:07:37.263801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.991 [2024-11-06 09:07:37.263832] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:00.991 [2024-11-06 09:07:37.263850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.991 09:07:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.991 "name": "Existed_Raid", 00:15:00.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.991 "strip_size_kb": 0, 00:15:00.991 "state": "configuring", 00:15:00.991 "raid_level": "raid1", 00:15:00.991 "superblock": false, 00:15:00.991 "num_base_bdevs": 3, 00:15:00.991 "num_base_bdevs_discovered": 1, 00:15:00.991 "num_base_bdevs_operational": 3, 00:15:00.991 "base_bdevs_list": [ 00:15:00.991 { 00:15:00.991 "name": "BaseBdev1", 00:15:00.991 "uuid": "9b69c3bc-6200-4b81-9aba-75755e71af42", 00:15:00.991 "is_configured": true, 00:15:00.991 "data_offset": 0, 00:15:00.991 "data_size": 65536 00:15:00.991 }, 00:15:00.991 { 00:15:00.991 "name": "BaseBdev2", 00:15:00.991 
"uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.991 "is_configured": false, 00:15:00.991 "data_offset": 0, 00:15:00.991 "data_size": 0 00:15:00.991 }, 00:15:00.991 { 00:15:00.991 "name": "BaseBdev3", 00:15:00.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.991 "is_configured": false, 00:15:00.991 "data_offset": 0, 00:15:00.991 "data_size": 0 00:15:00.991 } 00:15:00.991 ] 00:15:00.991 }' 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.991 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.249 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:01.249 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.249 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.508 [2024-11-06 09:07:37.813317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.508 BaseBdev2 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_wait_for_examine 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.508 [ 00:15:01.508 { 00:15:01.508 "name": "BaseBdev2", 00:15:01.508 "aliases": [ 00:15:01.508 "ce97b50c-6894-4971-9ea5-cb02b5795d9a" 00:15:01.508 ], 00:15:01.508 "product_name": "Malloc disk", 00:15:01.508 "block_size": 512, 00:15:01.508 "num_blocks": 65536, 00:15:01.508 "uuid": "ce97b50c-6894-4971-9ea5-cb02b5795d9a", 00:15:01.508 "assigned_rate_limits": { 00:15:01.508 "rw_ios_per_sec": 0, 00:15:01.508 "rw_mbytes_per_sec": 0, 00:15:01.508 "r_mbytes_per_sec": 0, 00:15:01.508 "w_mbytes_per_sec": 0 00:15:01.508 }, 00:15:01.508 "claimed": true, 00:15:01.508 "claim_type": "exclusive_write", 00:15:01.508 "zoned": false, 00:15:01.508 "supported_io_types": { 00:15:01.508 "read": true, 00:15:01.508 "write": true, 00:15:01.508 "unmap": true, 00:15:01.508 "flush": true, 00:15:01.508 "reset": true, 00:15:01.508 "nvme_admin": false, 00:15:01.508 "nvme_io": false, 00:15:01.508 "nvme_io_md": false, 00:15:01.508 "write_zeroes": true, 00:15:01.508 "zcopy": true, 00:15:01.508 "get_zone_info": false, 00:15:01.508 "zone_management": false, 00:15:01.508 "zone_append": false, 00:15:01.508 "compare": false, 00:15:01.508 "compare_and_write": false, 00:15:01.508 "abort": true, 00:15:01.508 "seek_hole": false, 00:15:01.508 "seek_data": false, 00:15:01.508 "copy": true, 00:15:01.508 "nvme_iov_md": false 
00:15:01.508 }, 00:15:01.508 "memory_domains": [ 00:15:01.508 { 00:15:01.508 "dma_device_id": "system", 00:15:01.508 "dma_device_type": 1 00:15:01.508 }, 00:15:01.508 { 00:15:01.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.508 "dma_device_type": 2 00:15:01.508 } 00:15:01.508 ], 00:15:01.508 "driver_specific": {} 00:15:01.508 } 00:15:01.508 ] 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.508 "name": "Existed_Raid", 00:15:01.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.508 "strip_size_kb": 0, 00:15:01.508 "state": "configuring", 00:15:01.508 "raid_level": "raid1", 00:15:01.508 "superblock": false, 00:15:01.508 "num_base_bdevs": 3, 00:15:01.508 "num_base_bdevs_discovered": 2, 00:15:01.508 "num_base_bdevs_operational": 3, 00:15:01.508 "base_bdevs_list": [ 00:15:01.508 { 00:15:01.508 "name": "BaseBdev1", 00:15:01.508 "uuid": "9b69c3bc-6200-4b81-9aba-75755e71af42", 00:15:01.508 "is_configured": true, 00:15:01.508 "data_offset": 0, 00:15:01.508 "data_size": 65536 00:15:01.508 }, 00:15:01.508 { 00:15:01.508 "name": "BaseBdev2", 00:15:01.508 "uuid": "ce97b50c-6894-4971-9ea5-cb02b5795d9a", 00:15:01.508 "is_configured": true, 00:15:01.508 "data_offset": 0, 00:15:01.508 "data_size": 65536 00:15:01.508 }, 00:15:01.508 { 00:15:01.508 "name": "BaseBdev3", 00:15:01.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.508 "is_configured": false, 00:15:01.508 "data_offset": 0, 00:15:01.508 "data_size": 0 00:15:01.508 } 00:15:01.508 ] 00:15:01.508 }' 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.508 09:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 
-- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.078 [2024-11-06 09:07:38.442608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.078 [2024-11-06 09:07:38.442673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:02.078 [2024-11-06 09:07:38.442694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:02.078 [2024-11-06 09:07:38.443076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:02.078 [2024-11-06 09:07:38.443303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:02.078 [2024-11-06 09:07:38.443321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:02.078 [2024-11-06 09:07:38.443652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.078 BaseBdev3 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:02.078 09:07:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.078 [ 00:15:02.078 { 00:15:02.078 "name": "BaseBdev3", 00:15:02.078 "aliases": [ 00:15:02.078 "94c664db-c0f5-4169-9859-437179c211ef" 00:15:02.078 ], 00:15:02.078 "product_name": "Malloc disk", 00:15:02.078 "block_size": 512, 00:15:02.078 "num_blocks": 65536, 00:15:02.078 "uuid": "94c664db-c0f5-4169-9859-437179c211ef", 00:15:02.078 "assigned_rate_limits": { 00:15:02.078 "rw_ios_per_sec": 0, 00:15:02.078 "rw_mbytes_per_sec": 0, 00:15:02.078 "r_mbytes_per_sec": 0, 00:15:02.078 "w_mbytes_per_sec": 0 00:15:02.078 }, 00:15:02.078 "claimed": true, 00:15:02.078 "claim_type": "exclusive_write", 00:15:02.078 "zoned": false, 00:15:02.078 "supported_io_types": { 00:15:02.078 "read": true, 00:15:02.078 "write": true, 00:15:02.078 "unmap": true, 00:15:02.078 "flush": true, 00:15:02.078 "reset": true, 00:15:02.078 "nvme_admin": false, 00:15:02.078 "nvme_io": false, 00:15:02.078 "nvme_io_md": false, 00:15:02.078 "write_zeroes": true, 00:15:02.078 "zcopy": true, 00:15:02.078 "get_zone_info": false, 00:15:02.078 "zone_management": false, 00:15:02.078 "zone_append": false, 00:15:02.078 "compare": false, 00:15:02.078 "compare_and_write": false, 00:15:02.078 "abort": true, 00:15:02.078 "seek_hole": false, 00:15:02.078 
"seek_data": false, 00:15:02.078 "copy": true, 00:15:02.078 "nvme_iov_md": false 00:15:02.078 }, 00:15:02.078 "memory_domains": [ 00:15:02.078 { 00:15:02.078 "dma_device_id": "system", 00:15:02.078 "dma_device_type": 1 00:15:02.078 }, 00:15:02.078 { 00:15:02.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.078 "dma_device_type": 2 00:15:02.078 } 00:15:02.078 ], 00:15:02.078 "driver_specific": {} 00:15:02.078 } 00:15:02.078 ] 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.078 "name": "Existed_Raid", 00:15:02.078 "uuid": "e5a50cc7-91cd-4857-8071-001293067cdf", 00:15:02.078 "strip_size_kb": 0, 00:15:02.078 "state": "online", 00:15:02.078 "raid_level": "raid1", 00:15:02.078 "superblock": false, 00:15:02.078 "num_base_bdevs": 3, 00:15:02.078 "num_base_bdevs_discovered": 3, 00:15:02.078 "num_base_bdevs_operational": 3, 00:15:02.078 "base_bdevs_list": [ 00:15:02.078 { 00:15:02.078 "name": "BaseBdev1", 00:15:02.078 "uuid": "9b69c3bc-6200-4b81-9aba-75755e71af42", 00:15:02.078 "is_configured": true, 00:15:02.078 "data_offset": 0, 00:15:02.078 "data_size": 65536 00:15:02.078 }, 00:15:02.078 { 00:15:02.078 "name": "BaseBdev2", 00:15:02.078 "uuid": "ce97b50c-6894-4971-9ea5-cb02b5795d9a", 00:15:02.078 "is_configured": true, 00:15:02.078 "data_offset": 0, 00:15:02.078 "data_size": 65536 00:15:02.078 }, 00:15:02.078 { 00:15:02.078 "name": "BaseBdev3", 00:15:02.078 "uuid": "94c664db-c0f5-4169-9859-437179c211ef", 00:15:02.078 "is_configured": true, 00:15:02.078 "data_offset": 0, 00:15:02.078 "data_size": 65536 00:15:02.078 } 00:15:02.078 ] 00:15:02.078 }' 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.078 09:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.647 
09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.647 [2024-11-06 09:07:39.019246] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:02.647 "name": "Existed_Raid", 00:15:02.647 "aliases": [ 00:15:02.647 "e5a50cc7-91cd-4857-8071-001293067cdf" 00:15:02.647 ], 00:15:02.647 "product_name": "Raid Volume", 00:15:02.647 "block_size": 512, 00:15:02.647 "num_blocks": 65536, 00:15:02.647 "uuid": "e5a50cc7-91cd-4857-8071-001293067cdf", 00:15:02.647 "assigned_rate_limits": { 00:15:02.647 "rw_ios_per_sec": 0, 00:15:02.647 "rw_mbytes_per_sec": 0, 00:15:02.647 "r_mbytes_per_sec": 0, 00:15:02.647 "w_mbytes_per_sec": 0 00:15:02.647 }, 00:15:02.647 "claimed": false, 00:15:02.647 "zoned": false, 
00:15:02.647 "supported_io_types": { 00:15:02.647 "read": true, 00:15:02.647 "write": true, 00:15:02.647 "unmap": false, 00:15:02.647 "flush": false, 00:15:02.647 "reset": true, 00:15:02.647 "nvme_admin": false, 00:15:02.647 "nvme_io": false, 00:15:02.647 "nvme_io_md": false, 00:15:02.647 "write_zeroes": true, 00:15:02.647 "zcopy": false, 00:15:02.647 "get_zone_info": false, 00:15:02.647 "zone_management": false, 00:15:02.647 "zone_append": false, 00:15:02.647 "compare": false, 00:15:02.647 "compare_and_write": false, 00:15:02.647 "abort": false, 00:15:02.647 "seek_hole": false, 00:15:02.647 "seek_data": false, 00:15:02.647 "copy": false, 00:15:02.647 "nvme_iov_md": false 00:15:02.647 }, 00:15:02.647 "memory_domains": [ 00:15:02.647 { 00:15:02.647 "dma_device_id": "system", 00:15:02.647 "dma_device_type": 1 00:15:02.647 }, 00:15:02.647 { 00:15:02.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.647 "dma_device_type": 2 00:15:02.647 }, 00:15:02.647 { 00:15:02.647 "dma_device_id": "system", 00:15:02.647 "dma_device_type": 1 00:15:02.647 }, 00:15:02.647 { 00:15:02.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.647 "dma_device_type": 2 00:15:02.647 }, 00:15:02.647 { 00:15:02.647 "dma_device_id": "system", 00:15:02.647 "dma_device_type": 1 00:15:02.647 }, 00:15:02.647 { 00:15:02.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.647 "dma_device_type": 2 00:15:02.647 } 00:15:02.647 ], 00:15:02.647 "driver_specific": { 00:15:02.647 "raid": { 00:15:02.647 "uuid": "e5a50cc7-91cd-4857-8071-001293067cdf", 00:15:02.647 "strip_size_kb": 0, 00:15:02.647 "state": "online", 00:15:02.647 "raid_level": "raid1", 00:15:02.647 "superblock": false, 00:15:02.647 "num_base_bdevs": 3, 00:15:02.647 "num_base_bdevs_discovered": 3, 00:15:02.647 "num_base_bdevs_operational": 3, 00:15:02.647 "base_bdevs_list": [ 00:15:02.647 { 00:15:02.647 "name": "BaseBdev1", 00:15:02.647 "uuid": "9b69c3bc-6200-4b81-9aba-75755e71af42", 00:15:02.647 "is_configured": true, 00:15:02.647 
"data_offset": 0, 00:15:02.647 "data_size": 65536 00:15:02.647 }, 00:15:02.647 { 00:15:02.647 "name": "BaseBdev2", 00:15:02.647 "uuid": "ce97b50c-6894-4971-9ea5-cb02b5795d9a", 00:15:02.647 "is_configured": true, 00:15:02.647 "data_offset": 0, 00:15:02.647 "data_size": 65536 00:15:02.647 }, 00:15:02.647 { 00:15:02.647 "name": "BaseBdev3", 00:15:02.647 "uuid": "94c664db-c0f5-4169-9859-437179c211ef", 00:15:02.647 "is_configured": true, 00:15:02.647 "data_offset": 0, 00:15:02.647 "data_size": 65536 00:15:02.647 } 00:15:02.647 ] 00:15:02.647 } 00:15:02.647 } 00:15:02.647 }' 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:02.647 BaseBdev2 00:15:02.647 BaseBdev3' 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.647 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.906 [2024-11-06 09:07:39.343001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.906 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.165 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.165 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.165 "name": "Existed_Raid", 00:15:03.165 "uuid": "e5a50cc7-91cd-4857-8071-001293067cdf", 00:15:03.165 "strip_size_kb": 0, 00:15:03.165 "state": "online", 00:15:03.165 "raid_level": "raid1", 00:15:03.165 "superblock": false, 00:15:03.165 "num_base_bdevs": 3, 00:15:03.165 "num_base_bdevs_discovered": 2, 00:15:03.165 "num_base_bdevs_operational": 2, 00:15:03.165 "base_bdevs_list": [ 00:15:03.165 { 00:15:03.165 "name": null, 00:15:03.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.165 "is_configured": false, 00:15:03.165 "data_offset": 0, 00:15:03.165 "data_size": 65536 00:15:03.165 }, 00:15:03.165 { 00:15:03.165 "name": "BaseBdev2", 00:15:03.165 "uuid": "ce97b50c-6894-4971-9ea5-cb02b5795d9a", 00:15:03.165 "is_configured": true, 00:15:03.165 "data_offset": 0, 00:15:03.165 "data_size": 65536 00:15:03.165 }, 00:15:03.165 { 00:15:03.165 "name": "BaseBdev3", 00:15:03.165 "uuid": "94c664db-c0f5-4169-9859-437179c211ef", 00:15:03.165 "is_configured": true, 00:15:03.165 "data_offset": 0, 00:15:03.165 "data_size": 65536 00:15:03.165 } 00:15:03.165 ] 
00:15:03.165 }' 00:15:03.165 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.165 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.427 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:03.427 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.427 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.427 09:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.427 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.427 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 09:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 [2024-11-06 09:07:40.018402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.686 09:07:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.686 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.686 [2024-11-06 09:07:40.166033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:03.686 [2024-11-06 09:07:40.166161] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.945 [2024-11-06 09:07:40.253775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.945 [2024-11-06 09:07:40.253874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.945 [2024-11-06 09:07:40.253899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:03.945 09:07:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.945 BaseBdev2 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:03.945 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:03.946 
09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 [ 00:15:03.946 { 00:15:03.946 "name": "BaseBdev2", 00:15:03.946 "aliases": [ 00:15:03.946 "310cea74-5259-4138-bb13-54d42967b68a" 00:15:03.946 ], 00:15:03.946 "product_name": "Malloc disk", 00:15:03.946 "block_size": 512, 00:15:03.946 "num_blocks": 65536, 00:15:03.946 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:03.946 "assigned_rate_limits": { 00:15:03.946 "rw_ios_per_sec": 0, 00:15:03.946 "rw_mbytes_per_sec": 0, 00:15:03.946 "r_mbytes_per_sec": 0, 00:15:03.946 "w_mbytes_per_sec": 0 00:15:03.946 }, 00:15:03.946 "claimed": false, 00:15:03.946 "zoned": false, 00:15:03.946 "supported_io_types": { 00:15:03.946 "read": true, 00:15:03.946 "write": true, 00:15:03.946 "unmap": true, 00:15:03.946 "flush": true, 00:15:03.946 "reset": true, 00:15:03.946 "nvme_admin": false, 00:15:03.946 "nvme_io": false, 00:15:03.946 "nvme_io_md": false, 00:15:03.946 "write_zeroes": true, 
00:15:03.946 "zcopy": true, 00:15:03.946 "get_zone_info": false, 00:15:03.946 "zone_management": false, 00:15:03.946 "zone_append": false, 00:15:03.946 "compare": false, 00:15:03.946 "compare_and_write": false, 00:15:03.946 "abort": true, 00:15:03.946 "seek_hole": false, 00:15:03.946 "seek_data": false, 00:15:03.946 "copy": true, 00:15:03.946 "nvme_iov_md": false 00:15:03.946 }, 00:15:03.946 "memory_domains": [ 00:15:03.946 { 00:15:03.946 "dma_device_id": "system", 00:15:03.946 "dma_device_type": 1 00:15:03.946 }, 00:15:03.946 { 00:15:03.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.946 "dma_device_type": 2 00:15:03.946 } 00:15:03.946 ], 00:15:03.946 "driver_specific": {} 00:15:03.946 } 00:15:03.946 ] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 BaseBdev3 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:03.946 09:07:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 [ 00:15:03.946 { 00:15:03.946 "name": "BaseBdev3", 00:15:03.946 "aliases": [ 00:15:03.946 "69a58f85-a1cd-4051-a1a7-a92cb9e58a66" 00:15:03.946 ], 00:15:03.946 "product_name": "Malloc disk", 00:15:03.946 "block_size": 512, 00:15:03.946 "num_blocks": 65536, 00:15:03.946 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:03.946 "assigned_rate_limits": { 00:15:03.946 "rw_ios_per_sec": 0, 00:15:03.946 "rw_mbytes_per_sec": 0, 00:15:03.946 "r_mbytes_per_sec": 0, 00:15:03.946 "w_mbytes_per_sec": 0 00:15:03.946 }, 00:15:03.946 "claimed": false, 00:15:03.946 "zoned": false, 00:15:03.946 "supported_io_types": { 00:15:03.946 "read": true, 00:15:03.946 "write": true, 00:15:03.946 "unmap": true, 00:15:03.946 "flush": true, 00:15:03.946 "reset": true, 00:15:03.946 "nvme_admin": false, 00:15:03.946 "nvme_io": false, 00:15:03.946 "nvme_io_md": false, 00:15:03.946 "write_zeroes": true, 
00:15:03.946 "zcopy": true, 00:15:03.946 "get_zone_info": false, 00:15:03.946 "zone_management": false, 00:15:03.946 "zone_append": false, 00:15:03.946 "compare": false, 00:15:03.946 "compare_and_write": false, 00:15:03.946 "abort": true, 00:15:03.946 "seek_hole": false, 00:15:03.946 "seek_data": false, 00:15:03.946 "copy": true, 00:15:03.946 "nvme_iov_md": false 00:15:03.946 }, 00:15:03.946 "memory_domains": [ 00:15:03.946 { 00:15:03.946 "dma_device_id": "system", 00:15:03.946 "dma_device_type": 1 00:15:03.946 }, 00:15:03.946 { 00:15:03.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.946 "dma_device_type": 2 00:15:03.946 } 00:15:03.946 ], 00:15:03.946 "driver_specific": {} 00:15:03.946 } 00:15:03.946 ] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.946 [2024-11-06 09:07:40.459936] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.946 [2024-11-06 09:07:40.460409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.946 [2024-11-06 09:07:40.460451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.946 [2024-11-06 09:07:40.462956] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.946 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.205 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.205 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:15:04.205 "name": "Existed_Raid", 00:15:04.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.205 "strip_size_kb": 0, 00:15:04.205 "state": "configuring", 00:15:04.205 "raid_level": "raid1", 00:15:04.205 "superblock": false, 00:15:04.205 "num_base_bdevs": 3, 00:15:04.205 "num_base_bdevs_discovered": 2, 00:15:04.205 "num_base_bdevs_operational": 3, 00:15:04.205 "base_bdevs_list": [ 00:15:04.205 { 00:15:04.205 "name": "BaseBdev1", 00:15:04.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.205 "is_configured": false, 00:15:04.205 "data_offset": 0, 00:15:04.205 "data_size": 0 00:15:04.205 }, 00:15:04.205 { 00:15:04.205 "name": "BaseBdev2", 00:15:04.205 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:04.205 "is_configured": true, 00:15:04.205 "data_offset": 0, 00:15:04.205 "data_size": 65536 00:15:04.205 }, 00:15:04.205 { 00:15:04.205 "name": "BaseBdev3", 00:15:04.205 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:04.205 "is_configured": true, 00:15:04.205 "data_offset": 0, 00:15:04.205 "data_size": 65536 00:15:04.205 } 00:15:04.205 ] 00:15:04.205 }' 00:15:04.205 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.205 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.466 [2024-11-06 09:07:40.988188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.466 09:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.725 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.725 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.725 "name": "Existed_Raid", 00:15:04.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.725 "strip_size_kb": 0, 00:15:04.725 "state": "configuring", 00:15:04.725 "raid_level": "raid1", 00:15:04.725 "superblock": false, 00:15:04.725 "num_base_bdevs": 3, 
00:15:04.725 "num_base_bdevs_discovered": 1, 00:15:04.725 "num_base_bdevs_operational": 3, 00:15:04.725 "base_bdevs_list": [ 00:15:04.725 { 00:15:04.725 "name": "BaseBdev1", 00:15:04.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.725 "is_configured": false, 00:15:04.725 "data_offset": 0, 00:15:04.725 "data_size": 0 00:15:04.725 }, 00:15:04.725 { 00:15:04.725 "name": null, 00:15:04.725 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:04.725 "is_configured": false, 00:15:04.725 "data_offset": 0, 00:15:04.725 "data_size": 65536 00:15:04.725 }, 00:15:04.725 { 00:15:04.725 "name": "BaseBdev3", 00:15:04.725 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:04.725 "is_configured": true, 00:15:04.725 "data_offset": 0, 00:15:04.725 "data_size": 65536 00:15:04.725 } 00:15:04.725 ] 00:15:04.725 }' 00:15:04.725 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.725 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.984 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:04.984 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.984 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.984 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.984 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.243 09:07:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.243 [2024-11-06 09:07:41.580298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:05.243 BaseBdev1 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.243 [ 00:15:05.243 { 00:15:05.243 "name": "BaseBdev1", 00:15:05.243 "aliases": [ 00:15:05.243 "937ce180-613f-45d3-bd65-9a0645bfdcf6" 00:15:05.243 ], 00:15:05.243 "product_name": "Malloc disk", 
00:15:05.243 "block_size": 512, 00:15:05.243 "num_blocks": 65536, 00:15:05.243 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:05.243 "assigned_rate_limits": { 00:15:05.243 "rw_ios_per_sec": 0, 00:15:05.243 "rw_mbytes_per_sec": 0, 00:15:05.243 "r_mbytes_per_sec": 0, 00:15:05.243 "w_mbytes_per_sec": 0 00:15:05.243 }, 00:15:05.243 "claimed": true, 00:15:05.243 "claim_type": "exclusive_write", 00:15:05.243 "zoned": false, 00:15:05.243 "supported_io_types": { 00:15:05.243 "read": true, 00:15:05.243 "write": true, 00:15:05.243 "unmap": true, 00:15:05.243 "flush": true, 00:15:05.243 "reset": true, 00:15:05.243 "nvme_admin": false, 00:15:05.243 "nvme_io": false, 00:15:05.243 "nvme_io_md": false, 00:15:05.243 "write_zeroes": true, 00:15:05.243 "zcopy": true, 00:15:05.243 "get_zone_info": false, 00:15:05.243 "zone_management": false, 00:15:05.243 "zone_append": false, 00:15:05.243 "compare": false, 00:15:05.243 "compare_and_write": false, 00:15:05.243 "abort": true, 00:15:05.243 "seek_hole": false, 00:15:05.243 "seek_data": false, 00:15:05.243 "copy": true, 00:15:05.243 "nvme_iov_md": false 00:15:05.243 }, 00:15:05.243 "memory_domains": [ 00:15:05.243 { 00:15:05.243 "dma_device_id": "system", 00:15:05.243 "dma_device_type": 1 00:15:05.243 }, 00:15:05.243 { 00:15:05.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.243 "dma_device_type": 2 00:15:05.243 } 00:15:05.243 ], 00:15:05.243 "driver_specific": {} 00:15:05.243 } 00:15:05.243 ] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.243 "name": "Existed_Raid", 00:15:05.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.243 "strip_size_kb": 0, 00:15:05.243 "state": "configuring", 00:15:05.243 "raid_level": "raid1", 00:15:05.243 "superblock": false, 00:15:05.243 "num_base_bdevs": 3, 00:15:05.243 "num_base_bdevs_discovered": 2, 00:15:05.243 "num_base_bdevs_operational": 3, 00:15:05.243 "base_bdevs_list": [ 00:15:05.243 { 00:15:05.243 "name": "BaseBdev1", 00:15:05.243 "uuid": 
"937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:05.243 "is_configured": true, 00:15:05.243 "data_offset": 0, 00:15:05.243 "data_size": 65536 00:15:05.243 }, 00:15:05.243 { 00:15:05.243 "name": null, 00:15:05.243 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:05.243 "is_configured": false, 00:15:05.243 "data_offset": 0, 00:15:05.243 "data_size": 65536 00:15:05.243 }, 00:15:05.243 { 00:15:05.243 "name": "BaseBdev3", 00:15:05.243 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:05.243 "is_configured": true, 00:15:05.243 "data_offset": 0, 00:15:05.243 "data_size": 65536 00:15:05.243 } 00:15:05.243 ] 00:15:05.243 }' 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.243 09:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.811 [2024-11-06 09:07:42.196487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:05.811 09:07:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.811 "name": "Existed_Raid", 00:15:05.811 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:05.811 "strip_size_kb": 0, 00:15:05.811 "state": "configuring", 00:15:05.811 "raid_level": "raid1", 00:15:05.811 "superblock": false, 00:15:05.811 "num_base_bdevs": 3, 00:15:05.811 "num_base_bdevs_discovered": 1, 00:15:05.811 "num_base_bdevs_operational": 3, 00:15:05.811 "base_bdevs_list": [ 00:15:05.811 { 00:15:05.811 "name": "BaseBdev1", 00:15:05.811 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:05.811 "is_configured": true, 00:15:05.811 "data_offset": 0, 00:15:05.811 "data_size": 65536 00:15:05.811 }, 00:15:05.811 { 00:15:05.811 "name": null, 00:15:05.811 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:05.811 "is_configured": false, 00:15:05.811 "data_offset": 0, 00:15:05.811 "data_size": 65536 00:15:05.811 }, 00:15:05.811 { 00:15:05.811 "name": null, 00:15:05.811 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:05.811 "is_configured": false, 00:15:05.811 "data_offset": 0, 00:15:05.811 "data_size": 65536 00:15:05.811 } 00:15:05.811 ] 00:15:05.811 }' 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.811 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.393 [2024-11-06 09:07:42.780708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.393 09:07:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.394 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.394 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.394 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.394 "name": "Existed_Raid", 00:15:06.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.394 "strip_size_kb": 0, 00:15:06.394 "state": "configuring", 00:15:06.394 "raid_level": "raid1", 00:15:06.394 "superblock": false, 00:15:06.394 "num_base_bdevs": 3, 00:15:06.394 "num_base_bdevs_discovered": 2, 00:15:06.394 "num_base_bdevs_operational": 3, 00:15:06.394 "base_bdevs_list": [ 00:15:06.394 { 00:15:06.394 "name": "BaseBdev1", 00:15:06.394 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:06.394 "is_configured": true, 00:15:06.394 "data_offset": 0, 00:15:06.394 "data_size": 65536 00:15:06.394 }, 00:15:06.394 { 00:15:06.394 "name": null, 00:15:06.394 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:06.394 "is_configured": false, 00:15:06.394 "data_offset": 0, 00:15:06.394 "data_size": 65536 00:15:06.394 }, 00:15:06.394 { 00:15:06.394 "name": "BaseBdev3", 00:15:06.394 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:06.394 "is_configured": true, 00:15:06.394 "data_offset": 0, 00:15:06.394 "data_size": 65536 00:15:06.394 } 00:15:06.394 ] 00:15:06.394 }' 00:15:06.394 09:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.394 09:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.960 [2024-11-06 09:07:43.352960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.960 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.961 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.961 09:07:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.961 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.961 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.961 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.961 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.961 09:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.219 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.219 "name": "Existed_Raid", 00:15:07.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.219 "strip_size_kb": 0, 00:15:07.219 "state": "configuring", 00:15:07.219 "raid_level": "raid1", 00:15:07.219 "superblock": false, 00:15:07.219 "num_base_bdevs": 3, 00:15:07.219 "num_base_bdevs_discovered": 1, 00:15:07.219 "num_base_bdevs_operational": 3, 00:15:07.219 "base_bdevs_list": [ 00:15:07.219 { 00:15:07.219 "name": null, 00:15:07.219 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:07.219 "is_configured": false, 00:15:07.219 "data_offset": 0, 00:15:07.219 "data_size": 65536 00:15:07.219 }, 00:15:07.219 { 00:15:07.219 "name": null, 00:15:07.219 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:07.219 "is_configured": false, 00:15:07.219 "data_offset": 0, 00:15:07.219 "data_size": 65536 00:15:07.219 }, 00:15:07.219 { 00:15:07.219 "name": "BaseBdev3", 00:15:07.219 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:07.219 "is_configured": true, 00:15:07.219 "data_offset": 0, 00:15:07.219 "data_size": 65536 00:15:07.219 } 00:15:07.219 ] 00:15:07.219 }' 00:15:07.219 09:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.219 09:07:43 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:07.477 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.477 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:07.477 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.477 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.736 [2024-11-06 09:07:44.058350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.736 "name": "Existed_Raid", 00:15:07.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.736 "strip_size_kb": 0, 00:15:07.736 "state": "configuring", 00:15:07.736 "raid_level": "raid1", 00:15:07.736 "superblock": false, 00:15:07.736 "num_base_bdevs": 3, 00:15:07.736 "num_base_bdevs_discovered": 2, 00:15:07.736 "num_base_bdevs_operational": 3, 00:15:07.736 "base_bdevs_list": [ 00:15:07.736 { 00:15:07.736 "name": null, 00:15:07.736 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:07.736 "is_configured": false, 00:15:07.736 "data_offset": 0, 00:15:07.736 "data_size": 65536 00:15:07.736 }, 00:15:07.736 { 00:15:07.736 "name": "BaseBdev2", 00:15:07.736 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:07.736 "is_configured": true, 00:15:07.736 "data_offset": 0, 00:15:07.736 "data_size": 65536 00:15:07.736 }, 00:15:07.736 { 
00:15:07.736 "name": "BaseBdev3", 00:15:07.736 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:07.736 "is_configured": true, 00:15:07.736 "data_offset": 0, 00:15:07.736 "data_size": 65536 00:15:07.736 } 00:15:07.736 ] 00:15:07.736 }' 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.736 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 937ce180-613f-45d3-bd65-9a0645bfdcf6 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.303 09:07:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.303 [2024-11-06 09:07:44.778243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:08.303 [2024-11-06 09:07:44.778328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:08.303 [2024-11-06 09:07:44.778341] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:08.303 [2024-11-06 09:07:44.778687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:08.303 [2024-11-06 09:07:44.778920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:08.303 [2024-11-06 09:07:44.778945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:08.303 [2024-11-06 09:07:44.779313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.303 NewBaseBdev 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.303 [ 00:15:08.303 { 00:15:08.303 "name": "NewBaseBdev", 00:15:08.303 "aliases": [ 00:15:08.303 "937ce180-613f-45d3-bd65-9a0645bfdcf6" 00:15:08.303 ], 00:15:08.303 "product_name": "Malloc disk", 00:15:08.303 "block_size": 512, 00:15:08.303 "num_blocks": 65536, 00:15:08.303 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:08.303 "assigned_rate_limits": { 00:15:08.303 "rw_ios_per_sec": 0, 00:15:08.303 "rw_mbytes_per_sec": 0, 00:15:08.303 "r_mbytes_per_sec": 0, 00:15:08.303 "w_mbytes_per_sec": 0 00:15:08.303 }, 00:15:08.303 "claimed": true, 00:15:08.303 "claim_type": "exclusive_write", 00:15:08.303 "zoned": false, 00:15:08.303 "supported_io_types": { 00:15:08.303 "read": true, 00:15:08.303 "write": true, 00:15:08.303 "unmap": true, 00:15:08.303 "flush": true, 00:15:08.303 "reset": true, 00:15:08.303 "nvme_admin": false, 00:15:08.303 "nvme_io": false, 00:15:08.303 "nvme_io_md": false, 00:15:08.303 "write_zeroes": true, 00:15:08.303 "zcopy": true, 00:15:08.303 "get_zone_info": false, 00:15:08.303 "zone_management": false, 00:15:08.303 "zone_append": false, 00:15:08.303 "compare": false, 00:15:08.303 "compare_and_write": false, 00:15:08.303 "abort": true, 00:15:08.303 "seek_hole": false, 00:15:08.303 "seek_data": false, 00:15:08.303 "copy": true, 00:15:08.303 "nvme_iov_md": false 00:15:08.303 }, 00:15:08.303 "memory_domains": [ 00:15:08.303 { 00:15:08.303 
"dma_device_id": "system", 00:15:08.303 "dma_device_type": 1 00:15:08.303 }, 00:15:08.303 { 00:15:08.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.303 "dma_device_type": 2 00:15:08.303 } 00:15:08.303 ], 00:15:08.303 "driver_specific": {} 00:15:08.303 } 00:15:08.303 ] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.303 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.304 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.562 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.562 "name": "Existed_Raid", 00:15:08.562 "uuid": "8c4b6242-b759-4fb1-a641-c79b5110fe12", 00:15:08.562 "strip_size_kb": 0, 00:15:08.562 "state": "online", 00:15:08.562 "raid_level": "raid1", 00:15:08.562 "superblock": false, 00:15:08.562 "num_base_bdevs": 3, 00:15:08.562 "num_base_bdevs_discovered": 3, 00:15:08.562 "num_base_bdevs_operational": 3, 00:15:08.562 "base_bdevs_list": [ 00:15:08.562 { 00:15:08.562 "name": "NewBaseBdev", 00:15:08.562 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:08.562 "is_configured": true, 00:15:08.562 "data_offset": 0, 00:15:08.562 "data_size": 65536 00:15:08.562 }, 00:15:08.562 { 00:15:08.562 "name": "BaseBdev2", 00:15:08.562 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:08.562 "is_configured": true, 00:15:08.562 "data_offset": 0, 00:15:08.562 "data_size": 65536 00:15:08.562 }, 00:15:08.562 { 00:15:08.562 "name": "BaseBdev3", 00:15:08.562 "uuid": "69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:08.562 "is_configured": true, 00:15:08.562 "data_offset": 0, 00:15:08.562 "data_size": 65536 00:15:08.562 } 00:15:08.562 ] 00:15:08.562 }' 00:15:08.562 09:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.562 09:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.820 09:07:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.820 [2024-11-06 09:07:45.330908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.820 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:09.079 "name": "Existed_Raid", 00:15:09.079 "aliases": [ 00:15:09.079 "8c4b6242-b759-4fb1-a641-c79b5110fe12" 00:15:09.079 ], 00:15:09.079 "product_name": "Raid Volume", 00:15:09.079 "block_size": 512, 00:15:09.079 "num_blocks": 65536, 00:15:09.079 "uuid": "8c4b6242-b759-4fb1-a641-c79b5110fe12", 00:15:09.079 "assigned_rate_limits": { 00:15:09.079 "rw_ios_per_sec": 0, 00:15:09.079 "rw_mbytes_per_sec": 0, 00:15:09.079 "r_mbytes_per_sec": 0, 00:15:09.079 "w_mbytes_per_sec": 0 00:15:09.079 }, 00:15:09.079 "claimed": false, 00:15:09.079 "zoned": false, 00:15:09.079 "supported_io_types": { 00:15:09.079 "read": true, 00:15:09.079 "write": true, 00:15:09.079 "unmap": false, 00:15:09.079 "flush": false, 00:15:09.079 "reset": true, 00:15:09.079 "nvme_admin": false, 00:15:09.079 "nvme_io": false, 00:15:09.079 "nvme_io_md": false, 00:15:09.079 "write_zeroes": true, 00:15:09.079 "zcopy": false, 00:15:09.079 
"get_zone_info": false, 00:15:09.079 "zone_management": false, 00:15:09.079 "zone_append": false, 00:15:09.079 "compare": false, 00:15:09.079 "compare_and_write": false, 00:15:09.079 "abort": false, 00:15:09.079 "seek_hole": false, 00:15:09.079 "seek_data": false, 00:15:09.079 "copy": false, 00:15:09.079 "nvme_iov_md": false 00:15:09.079 }, 00:15:09.079 "memory_domains": [ 00:15:09.079 { 00:15:09.079 "dma_device_id": "system", 00:15:09.079 "dma_device_type": 1 00:15:09.079 }, 00:15:09.079 { 00:15:09.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.079 "dma_device_type": 2 00:15:09.079 }, 00:15:09.079 { 00:15:09.079 "dma_device_id": "system", 00:15:09.079 "dma_device_type": 1 00:15:09.079 }, 00:15:09.079 { 00:15:09.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.079 "dma_device_type": 2 00:15:09.079 }, 00:15:09.079 { 00:15:09.079 "dma_device_id": "system", 00:15:09.079 "dma_device_type": 1 00:15:09.079 }, 00:15:09.079 { 00:15:09.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.079 "dma_device_type": 2 00:15:09.079 } 00:15:09.079 ], 00:15:09.079 "driver_specific": { 00:15:09.079 "raid": { 00:15:09.079 "uuid": "8c4b6242-b759-4fb1-a641-c79b5110fe12", 00:15:09.079 "strip_size_kb": 0, 00:15:09.079 "state": "online", 00:15:09.079 "raid_level": "raid1", 00:15:09.079 "superblock": false, 00:15:09.079 "num_base_bdevs": 3, 00:15:09.079 "num_base_bdevs_discovered": 3, 00:15:09.079 "num_base_bdevs_operational": 3, 00:15:09.079 "base_bdevs_list": [ 00:15:09.079 { 00:15:09.079 "name": "NewBaseBdev", 00:15:09.079 "uuid": "937ce180-613f-45d3-bd65-9a0645bfdcf6", 00:15:09.079 "is_configured": true, 00:15:09.079 "data_offset": 0, 00:15:09.079 "data_size": 65536 00:15:09.079 }, 00:15:09.079 { 00:15:09.079 "name": "BaseBdev2", 00:15:09.079 "uuid": "310cea74-5259-4138-bb13-54d42967b68a", 00:15:09.079 "is_configured": true, 00:15:09.079 "data_offset": 0, 00:15:09.079 "data_size": 65536 00:15:09.079 }, 00:15:09.079 { 00:15:09.079 "name": "BaseBdev3", 00:15:09.079 "uuid": 
"69a58f85-a1cd-4051-a1a7-a92cb9e58a66", 00:15:09.079 "is_configured": true, 00:15:09.079 "data_offset": 0, 00:15:09.079 "data_size": 65536 00:15:09.079 } 00:15:09.079 ] 00:15:09.079 } 00:15:09.079 } 00:15:09.079 }' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:09.079 BaseBdev2 00:15:09.079 BaseBdev3' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.079 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.338 
[2024-11-06 09:07:45.654606] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:09.338 [2024-11-06 09:07:45.654650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.338 [2024-11-06 09:07:45.654755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.338 [2024-11-06 09:07:45.655134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.338 [2024-11-06 09:07:45.655154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67380 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 67380 ']' 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 67380 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67380 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67380' 00:15:09.338 killing process with pid 67380 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 67380 00:15:09.338 [2024-11-06 
09:07:45.694306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.338 09:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 67380 00:15:09.596 [2024-11-06 09:07:45.970417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:10.531 00:15:10.531 real 0m11.991s 00:15:10.531 user 0m19.912s 00:15:10.531 sys 0m1.672s 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.531 ************************************ 00:15:10.531 END TEST raid_state_function_test 00:15:10.531 ************************************ 00:15:10.531 09:07:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:15:10.531 09:07:47 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:10.531 09:07:47 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:10.531 09:07:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:10.531 ************************************ 00:15:10.531 START TEST raid_state_function_test_sb 00:15:10.531 ************************************ 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:10.531 09:07:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:10.531 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:10.790 
09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68018 00:15:10.790 Process raid pid: 68018 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68018' 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68018 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 68018 ']' 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:10.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:10.790 09:07:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.790 [2024-11-06 09:07:47.157464] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:15:10.790 [2024-11-06 09:07:47.157637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.049 [2024-11-06 09:07:47.333222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.049 [2024-11-06 09:07:47.465710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.308 [2024-11-06 09:07:47.679540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.308 [2024-11-06 09:07:47.679869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:11.875 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:11.875 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:11.875 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:11.875 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.875 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.875 [2024-11-06 09:07:48.201195] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.875 [2024-11-06 09:07:48.201262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.875 [2024-11-06 09:07:48.201280] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.875 [2024-11-06 09:07:48.201297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.875 [2024-11-06 09:07:48.201307] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:11.875 [2024-11-06 09:07:48.201321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:11.875 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.876 "name": "Existed_Raid", 00:15:11.876 "uuid": "14ebcb66-8678-4de3-9f64-61dd7e295889", 00:15:11.876 "strip_size_kb": 0, 00:15:11.876 "state": "configuring", 00:15:11.876 "raid_level": "raid1", 00:15:11.876 "superblock": true, 00:15:11.876 "num_base_bdevs": 3, 00:15:11.876 "num_base_bdevs_discovered": 0, 00:15:11.876 "num_base_bdevs_operational": 3, 00:15:11.876 "base_bdevs_list": [ 00:15:11.876 { 00:15:11.876 "name": "BaseBdev1", 00:15:11.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.876 "is_configured": false, 00:15:11.876 "data_offset": 0, 00:15:11.876 "data_size": 0 00:15:11.876 }, 00:15:11.876 { 00:15:11.876 "name": "BaseBdev2", 00:15:11.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.876 "is_configured": false, 00:15:11.876 "data_offset": 0, 00:15:11.876 "data_size": 0 00:15:11.876 }, 00:15:11.876 { 00:15:11.876 "name": "BaseBdev3", 00:15:11.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.876 "is_configured": false, 00:15:11.876 "data_offset": 0, 00:15:11.876 "data_size": 0 00:15:11.876 } 00:15:11.876 ] 00:15:11.876 }' 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.876 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 [2024-11-06 09:07:48.717247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.443 [2024-11-06 09:07:48.717290] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 [2024-11-06 09:07:48.725241] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:12.443 [2024-11-06 09:07:48.725301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:12.443 [2024-11-06 09:07:48.725317] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.443 [2024-11-06 09:07:48.725333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.443 [2024-11-06 09:07:48.725343] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:12.443 [2024-11-06 09:07:48.725357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 [2024-11-06 09:07:48.770460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.443 BaseBdev1 
00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 [ 00:15:12.443 { 00:15:12.443 "name": "BaseBdev1", 00:15:12.443 "aliases": [ 00:15:12.443 "000ea749-6bfe-413a-a164-1e293c20b535" 00:15:12.443 ], 00:15:12.443 "product_name": "Malloc disk", 00:15:12.443 "block_size": 512, 00:15:12.443 "num_blocks": 65536, 00:15:12.443 "uuid": "000ea749-6bfe-413a-a164-1e293c20b535", 00:15:12.443 "assigned_rate_limits": { 00:15:12.443 
"rw_ios_per_sec": 0, 00:15:12.443 "rw_mbytes_per_sec": 0, 00:15:12.443 "r_mbytes_per_sec": 0, 00:15:12.443 "w_mbytes_per_sec": 0 00:15:12.443 }, 00:15:12.443 "claimed": true, 00:15:12.443 "claim_type": "exclusive_write", 00:15:12.443 "zoned": false, 00:15:12.443 "supported_io_types": { 00:15:12.443 "read": true, 00:15:12.443 "write": true, 00:15:12.443 "unmap": true, 00:15:12.443 "flush": true, 00:15:12.443 "reset": true, 00:15:12.443 "nvme_admin": false, 00:15:12.443 "nvme_io": false, 00:15:12.443 "nvme_io_md": false, 00:15:12.443 "write_zeroes": true, 00:15:12.443 "zcopy": true, 00:15:12.443 "get_zone_info": false, 00:15:12.443 "zone_management": false, 00:15:12.443 "zone_append": false, 00:15:12.443 "compare": false, 00:15:12.443 "compare_and_write": false, 00:15:12.443 "abort": true, 00:15:12.443 "seek_hole": false, 00:15:12.443 "seek_data": false, 00:15:12.443 "copy": true, 00:15:12.443 "nvme_iov_md": false 00:15:12.443 }, 00:15:12.443 "memory_domains": [ 00:15:12.443 { 00:15:12.443 "dma_device_id": "system", 00:15:12.443 "dma_device_type": 1 00:15:12.443 }, 00:15:12.443 { 00:15:12.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.443 "dma_device_type": 2 00:15:12.443 } 00:15:12.443 ], 00:15:12.443 "driver_specific": {} 00:15:12.443 } 00:15:12.443 ] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.443 "name": "Existed_Raid", 00:15:12.443 "uuid": "938120a6-6a94-4d0e-b3f5-acd19fea25eb", 00:15:12.443 "strip_size_kb": 0, 00:15:12.443 "state": "configuring", 00:15:12.443 "raid_level": "raid1", 00:15:12.443 "superblock": true, 00:15:12.443 "num_base_bdevs": 3, 00:15:12.443 "num_base_bdevs_discovered": 1, 00:15:12.443 "num_base_bdevs_operational": 3, 00:15:12.443 "base_bdevs_list": [ 00:15:12.443 { 00:15:12.443 "name": "BaseBdev1", 00:15:12.443 "uuid": "000ea749-6bfe-413a-a164-1e293c20b535", 00:15:12.443 "is_configured": true, 00:15:12.443 "data_offset": 2048, 00:15:12.443 "data_size": 63488 
00:15:12.443 }, 00:15:12.443 { 00:15:12.443 "name": "BaseBdev2", 00:15:12.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.443 "is_configured": false, 00:15:12.443 "data_offset": 0, 00:15:12.443 "data_size": 0 00:15:12.443 }, 00:15:12.443 { 00:15:12.443 "name": "BaseBdev3", 00:15:12.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.443 "is_configured": false, 00:15:12.443 "data_offset": 0, 00:15:12.443 "data_size": 0 00:15:12.443 } 00:15:12.443 ] 00:15:12.443 }' 00:15:12.443 09:07:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.444 09:07:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.009 [2024-11-06 09:07:49.330676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.009 [2024-11-06 09:07:49.330895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.009 [2024-11-06 09:07:49.342774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.009 [2024-11-06 09:07:49.345313] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:13.009 [2024-11-06 09:07:49.345489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:13.009 [2024-11-06 09:07:49.345615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:13.009 [2024-11-06 09:07:49.345751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.009 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.009 "name": "Existed_Raid", 00:15:13.009 "uuid": "ddae8038-73f2-4749-83b6-9a03beb54453", 00:15:13.009 "strip_size_kb": 0, 00:15:13.010 "state": "configuring", 00:15:13.010 "raid_level": "raid1", 00:15:13.010 "superblock": true, 00:15:13.010 "num_base_bdevs": 3, 00:15:13.010 "num_base_bdevs_discovered": 1, 00:15:13.010 "num_base_bdevs_operational": 3, 00:15:13.010 "base_bdevs_list": [ 00:15:13.010 { 00:15:13.010 "name": "BaseBdev1", 00:15:13.010 "uuid": "000ea749-6bfe-413a-a164-1e293c20b535", 00:15:13.010 "is_configured": true, 00:15:13.010 "data_offset": 2048, 00:15:13.010 "data_size": 63488 00:15:13.010 }, 00:15:13.010 { 00:15:13.010 "name": "BaseBdev2", 00:15:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.010 "is_configured": false, 00:15:13.010 "data_offset": 0, 00:15:13.010 "data_size": 0 00:15:13.010 }, 00:15:13.010 { 00:15:13.010 "name": "BaseBdev3", 00:15:13.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.010 "is_configured": false, 00:15:13.010 "data_offset": 0, 00:15:13.010 "data_size": 0 00:15:13.010 } 00:15:13.010 ] 00:15:13.010 }' 00:15:13.010 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.010 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.575 [2024-11-06 09:07:49.889336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.575 BaseBdev2 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.575 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.576 [ 00:15:13.576 { 00:15:13.576 "name": "BaseBdev2", 00:15:13.576 "aliases": [ 00:15:13.576 "817c2f73-b812-4727-abd3-1ba8c845ba5c" 00:15:13.576 ], 00:15:13.576 "product_name": "Malloc disk", 00:15:13.576 "block_size": 512, 00:15:13.576 "num_blocks": 65536, 00:15:13.576 "uuid": "817c2f73-b812-4727-abd3-1ba8c845ba5c", 00:15:13.576 "assigned_rate_limits": { 00:15:13.576 "rw_ios_per_sec": 0, 00:15:13.576 "rw_mbytes_per_sec": 0, 00:15:13.576 "r_mbytes_per_sec": 0, 00:15:13.576 "w_mbytes_per_sec": 0 00:15:13.576 }, 00:15:13.576 "claimed": true, 00:15:13.576 "claim_type": "exclusive_write", 00:15:13.576 "zoned": false, 00:15:13.576 "supported_io_types": { 00:15:13.576 "read": true, 00:15:13.576 "write": true, 00:15:13.576 "unmap": true, 00:15:13.576 "flush": true, 00:15:13.576 "reset": true, 00:15:13.576 "nvme_admin": false, 00:15:13.576 "nvme_io": false, 00:15:13.576 "nvme_io_md": false, 00:15:13.576 "write_zeroes": true, 00:15:13.576 "zcopy": true, 00:15:13.576 "get_zone_info": false, 00:15:13.576 "zone_management": false, 00:15:13.576 "zone_append": false, 00:15:13.576 "compare": false, 00:15:13.576 "compare_and_write": false, 00:15:13.576 "abort": true, 00:15:13.576 "seek_hole": false, 00:15:13.576 "seek_data": false, 00:15:13.576 "copy": true, 00:15:13.576 "nvme_iov_md": false 00:15:13.576 }, 00:15:13.576 "memory_domains": [ 00:15:13.576 { 00:15:13.576 "dma_device_id": "system", 00:15:13.576 "dma_device_type": 1 00:15:13.576 }, 00:15:13.576 { 00:15:13.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.576 "dma_device_type": 2 00:15:13.576 } 00:15:13.576 ], 00:15:13.576 "driver_specific": {} 00:15:13.576 } 00:15:13.576 ] 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.576 
09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.576 "name": "Existed_Raid", 00:15:13.576 "uuid": "ddae8038-73f2-4749-83b6-9a03beb54453", 00:15:13.576 "strip_size_kb": 0, 00:15:13.576 "state": "configuring", 00:15:13.576 "raid_level": "raid1", 00:15:13.576 "superblock": true, 00:15:13.576 "num_base_bdevs": 3, 00:15:13.576 "num_base_bdevs_discovered": 2, 00:15:13.576 "num_base_bdevs_operational": 3, 00:15:13.576 "base_bdevs_list": [ 00:15:13.576 { 00:15:13.576 "name": "BaseBdev1", 00:15:13.576 "uuid": "000ea749-6bfe-413a-a164-1e293c20b535", 00:15:13.576 "is_configured": true, 00:15:13.576 "data_offset": 2048, 00:15:13.576 "data_size": 63488 00:15:13.576 }, 00:15:13.576 { 00:15:13.576 "name": "BaseBdev2", 00:15:13.576 "uuid": "817c2f73-b812-4727-abd3-1ba8c845ba5c", 00:15:13.576 "is_configured": true, 00:15:13.576 "data_offset": 2048, 00:15:13.576 "data_size": 63488 00:15:13.576 }, 00:15:13.576 { 00:15:13.576 "name": "BaseBdev3", 00:15:13.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.576 "is_configured": false, 00:15:13.576 "data_offset": 0, 00:15:13.576 "data_size": 0 00:15:13.576 } 00:15:13.576 ] 00:15:13.576 }' 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.576 09:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 [2024-11-06 09:07:50.484036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.143 [2024-11-06 09:07:50.484372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:15:14.143 [2024-11-06 09:07:50.484406] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:14.143 BaseBdev3 00:15:14.143 [2024-11-06 09:07:50.484754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:14.143 [2024-11-06 09:07:50.484980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:14.143 [2024-11-06 09:07:50.485006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:14.143 [2024-11-06 09:07:50.485193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.143 09:07:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 [ 00:15:14.143 { 00:15:14.143 "name": "BaseBdev3", 00:15:14.143 "aliases": [ 00:15:14.143 "18ebe296-ceb9-4129-aea7-759bdcec372a" 00:15:14.143 ], 00:15:14.143 "product_name": "Malloc disk", 00:15:14.143 "block_size": 512, 00:15:14.143 "num_blocks": 65536, 00:15:14.143 "uuid": "18ebe296-ceb9-4129-aea7-759bdcec372a", 00:15:14.143 "assigned_rate_limits": { 00:15:14.143 "rw_ios_per_sec": 0, 00:15:14.143 "rw_mbytes_per_sec": 0, 00:15:14.143 "r_mbytes_per_sec": 0, 00:15:14.143 "w_mbytes_per_sec": 0 00:15:14.143 }, 00:15:14.143 "claimed": true, 00:15:14.143 "claim_type": "exclusive_write", 00:15:14.143 "zoned": false, 00:15:14.143 "supported_io_types": { 00:15:14.143 "read": true, 00:15:14.143 "write": true, 00:15:14.143 "unmap": true, 00:15:14.143 "flush": true, 00:15:14.143 "reset": true, 00:15:14.143 "nvme_admin": false, 00:15:14.143 "nvme_io": false, 00:15:14.143 "nvme_io_md": false, 00:15:14.143 "write_zeroes": true, 00:15:14.143 "zcopy": true, 00:15:14.143 "get_zone_info": false, 00:15:14.143 "zone_management": false, 00:15:14.143 "zone_append": false, 00:15:14.143 "compare": false, 00:15:14.143 "compare_and_write": false, 00:15:14.143 "abort": true, 00:15:14.143 "seek_hole": false, 00:15:14.143 "seek_data": false, 00:15:14.143 "copy": true, 00:15:14.143 "nvme_iov_md": false 00:15:14.143 }, 00:15:14.143 "memory_domains": [ 00:15:14.143 { 00:15:14.143 "dma_device_id": "system", 00:15:14.143 "dma_device_type": 1 00:15:14.143 }, 00:15:14.143 { 00:15:14.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.143 "dma_device_type": 2 00:15:14.143 } 00:15:14.143 ], 00:15:14.143 "driver_specific": {} 00:15:14.143 } 00:15:14.143 ] 
00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.143 
09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.143 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.143 "name": "Existed_Raid", 00:15:14.143 "uuid": "ddae8038-73f2-4749-83b6-9a03beb54453", 00:15:14.143 "strip_size_kb": 0, 00:15:14.143 "state": "online", 00:15:14.143 "raid_level": "raid1", 00:15:14.143 "superblock": true, 00:15:14.143 "num_base_bdevs": 3, 00:15:14.143 "num_base_bdevs_discovered": 3, 00:15:14.143 "num_base_bdevs_operational": 3, 00:15:14.143 "base_bdevs_list": [ 00:15:14.143 { 00:15:14.143 "name": "BaseBdev1", 00:15:14.143 "uuid": "000ea749-6bfe-413a-a164-1e293c20b535", 00:15:14.143 "is_configured": true, 00:15:14.143 "data_offset": 2048, 00:15:14.143 "data_size": 63488 00:15:14.143 }, 00:15:14.143 { 00:15:14.143 "name": "BaseBdev2", 00:15:14.144 "uuid": "817c2f73-b812-4727-abd3-1ba8c845ba5c", 00:15:14.144 "is_configured": true, 00:15:14.144 "data_offset": 2048, 00:15:14.144 "data_size": 63488 00:15:14.144 }, 00:15:14.144 { 00:15:14.144 "name": "BaseBdev3", 00:15:14.144 "uuid": "18ebe296-ceb9-4129-aea7-759bdcec372a", 00:15:14.144 "is_configured": true, 00:15:14.144 "data_offset": 2048, 00:15:14.144 "data_size": 63488 00:15:14.144 } 00:15:14.144 ] 00:15:14.144 }' 00:15:14.144 09:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.144 09:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.709 [2024-11-06 09:07:51.036653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.709 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:14.709 "name": "Existed_Raid", 00:15:14.709 "aliases": [ 00:15:14.709 "ddae8038-73f2-4749-83b6-9a03beb54453" 00:15:14.709 ], 00:15:14.709 "product_name": "Raid Volume", 00:15:14.709 "block_size": 512, 00:15:14.709 "num_blocks": 63488, 00:15:14.709 "uuid": "ddae8038-73f2-4749-83b6-9a03beb54453", 00:15:14.709 "assigned_rate_limits": { 00:15:14.709 "rw_ios_per_sec": 0, 00:15:14.709 "rw_mbytes_per_sec": 0, 00:15:14.709 "r_mbytes_per_sec": 0, 00:15:14.709 "w_mbytes_per_sec": 0 00:15:14.709 }, 00:15:14.709 "claimed": false, 00:15:14.709 "zoned": false, 00:15:14.709 "supported_io_types": { 00:15:14.709 "read": true, 00:15:14.709 "write": true, 00:15:14.709 "unmap": false, 00:15:14.709 "flush": false, 00:15:14.709 "reset": true, 00:15:14.709 "nvme_admin": false, 00:15:14.709 "nvme_io": false, 00:15:14.709 "nvme_io_md": false, 00:15:14.709 "write_zeroes": true, 
00:15:14.709 "zcopy": false, 00:15:14.709 "get_zone_info": false, 00:15:14.709 "zone_management": false, 00:15:14.709 "zone_append": false, 00:15:14.709 "compare": false, 00:15:14.709 "compare_and_write": false, 00:15:14.709 "abort": false, 00:15:14.709 "seek_hole": false, 00:15:14.709 "seek_data": false, 00:15:14.709 "copy": false, 00:15:14.709 "nvme_iov_md": false 00:15:14.709 }, 00:15:14.709 "memory_domains": [ 00:15:14.709 { 00:15:14.709 "dma_device_id": "system", 00:15:14.709 "dma_device_type": 1 00:15:14.709 }, 00:15:14.709 { 00:15:14.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.709 "dma_device_type": 2 00:15:14.709 }, 00:15:14.709 { 00:15:14.709 "dma_device_id": "system", 00:15:14.709 "dma_device_type": 1 00:15:14.709 }, 00:15:14.709 { 00:15:14.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.709 "dma_device_type": 2 00:15:14.709 }, 00:15:14.709 { 00:15:14.709 "dma_device_id": "system", 00:15:14.709 "dma_device_type": 1 00:15:14.709 }, 00:15:14.709 { 00:15:14.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.709 "dma_device_type": 2 00:15:14.709 } 00:15:14.709 ], 00:15:14.709 "driver_specific": { 00:15:14.709 "raid": { 00:15:14.709 "uuid": "ddae8038-73f2-4749-83b6-9a03beb54453", 00:15:14.709 "strip_size_kb": 0, 00:15:14.709 "state": "online", 00:15:14.709 "raid_level": "raid1", 00:15:14.709 "superblock": true, 00:15:14.709 "num_base_bdevs": 3, 00:15:14.709 "num_base_bdevs_discovered": 3, 00:15:14.709 "num_base_bdevs_operational": 3, 00:15:14.709 "base_bdevs_list": [ 00:15:14.709 { 00:15:14.709 "name": "BaseBdev1", 00:15:14.709 "uuid": "000ea749-6bfe-413a-a164-1e293c20b535", 00:15:14.709 "is_configured": true, 00:15:14.709 "data_offset": 2048, 00:15:14.709 "data_size": 63488 00:15:14.709 }, 00:15:14.709 { 00:15:14.709 "name": "BaseBdev2", 00:15:14.709 "uuid": "817c2f73-b812-4727-abd3-1ba8c845ba5c", 00:15:14.709 "is_configured": true, 00:15:14.710 "data_offset": 2048, 00:15:14.710 "data_size": 63488 00:15:14.710 }, 00:15:14.710 { 
00:15:14.710 "name": "BaseBdev3", 00:15:14.710 "uuid": "18ebe296-ceb9-4129-aea7-759bdcec372a", 00:15:14.710 "is_configured": true, 00:15:14.710 "data_offset": 2048, 00:15:14.710 "data_size": 63488 00:15:14.710 } 00:15:14.710 ] 00:15:14.710 } 00:15:14.710 } 00:15:14.710 }' 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:14.710 BaseBdev2 00:15:14.710 BaseBdev3' 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.710 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.969 09:07:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 [2024-11-06 09:07:51.344432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.969 
09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.969 "name": "Existed_Raid", 00:15:14.969 "uuid": "ddae8038-73f2-4749-83b6-9a03beb54453", 00:15:14.969 "strip_size_kb": 0, 00:15:14.969 "state": "online", 00:15:14.969 "raid_level": "raid1", 00:15:14.969 "superblock": true, 00:15:14.969 "num_base_bdevs": 3, 00:15:14.969 "num_base_bdevs_discovered": 2, 00:15:14.969 "num_base_bdevs_operational": 2, 00:15:14.969 "base_bdevs_list": [ 00:15:14.969 { 00:15:14.969 "name": null, 00:15:14.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.969 "is_configured": false, 00:15:14.969 "data_offset": 0, 00:15:14.969 "data_size": 63488 00:15:14.969 }, 00:15:14.969 { 00:15:14.969 "name": "BaseBdev2", 00:15:14.969 "uuid": "817c2f73-b812-4727-abd3-1ba8c845ba5c", 00:15:14.969 "is_configured": true, 00:15:14.969 "data_offset": 2048, 00:15:14.969 "data_size": 63488 00:15:14.969 }, 00:15:14.969 { 00:15:14.969 "name": "BaseBdev3", 00:15:14.969 "uuid": "18ebe296-ceb9-4129-aea7-759bdcec372a", 00:15:14.969 "is_configured": true, 00:15:14.969 "data_offset": 2048, 00:15:14.969 "data_size": 63488 00:15:14.969 } 00:15:14.969 ] 00:15:14.969 }' 00:15:14.969 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.969 
09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.536 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:15.536 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.536 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.536 09:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:15.536 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.536 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.536 09:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.536 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:15.536 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.536 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:15.536 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.536 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.536 [2024-11-06 09:07:52.035581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.794 [2024-11-06 09:07:52.174855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:15.794 [2024-11-06 09:07:52.174984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.794 [2024-11-06 09:07:52.262618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.794 [2024-11-06 09:07:52.263035] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.794 [2024-11-06 09:07:52.263073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.794 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.052 BaseBdev2 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.052 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.052 [ 00:15:16.052 { 00:15:16.052 "name": "BaseBdev2", 00:15:16.052 "aliases": [ 00:15:16.052 "4a350ce7-0f8e-452e-aec7-49cd6a03689b" 00:15:16.052 ], 00:15:16.052 "product_name": "Malloc disk", 00:15:16.052 "block_size": 512, 00:15:16.052 "num_blocks": 65536, 00:15:16.052 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:16.052 "assigned_rate_limits": { 00:15:16.052 "rw_ios_per_sec": 0, 00:15:16.052 "rw_mbytes_per_sec": 0, 00:15:16.052 "r_mbytes_per_sec": 0, 00:15:16.052 "w_mbytes_per_sec": 0 00:15:16.052 }, 00:15:16.052 "claimed": false, 00:15:16.052 "zoned": false, 00:15:16.052 "supported_io_types": { 00:15:16.052 "read": true, 00:15:16.052 "write": true, 00:15:16.052 "unmap": true, 00:15:16.052 "flush": true, 00:15:16.052 "reset": true, 00:15:16.052 "nvme_admin": false, 00:15:16.052 "nvme_io": false, 00:15:16.052 
"nvme_io_md": false, 00:15:16.052 "write_zeroes": true, 00:15:16.052 "zcopy": true, 00:15:16.052 "get_zone_info": false, 00:15:16.053 "zone_management": false, 00:15:16.053 "zone_append": false, 00:15:16.053 "compare": false, 00:15:16.053 "compare_and_write": false, 00:15:16.053 "abort": true, 00:15:16.053 "seek_hole": false, 00:15:16.053 "seek_data": false, 00:15:16.053 "copy": true, 00:15:16.053 "nvme_iov_md": false 00:15:16.053 }, 00:15:16.053 "memory_domains": [ 00:15:16.053 { 00:15:16.053 "dma_device_id": "system", 00:15:16.053 "dma_device_type": 1 00:15:16.053 }, 00:15:16.053 { 00:15:16.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.053 "dma_device_type": 2 00:15:16.053 } 00:15:16.053 ], 00:15:16.053 "driver_specific": {} 00:15:16.053 } 00:15:16.053 ] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.053 BaseBdev3 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.053 [ 00:15:16.053 { 00:15:16.053 "name": "BaseBdev3", 00:15:16.053 "aliases": [ 00:15:16.053 "e22cbd51-19f0-4659-b0ef-f3fc226431b3" 00:15:16.053 ], 00:15:16.053 "product_name": "Malloc disk", 00:15:16.053 "block_size": 512, 00:15:16.053 "num_blocks": 65536, 00:15:16.053 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:16.053 "assigned_rate_limits": { 00:15:16.053 "rw_ios_per_sec": 0, 00:15:16.053 "rw_mbytes_per_sec": 0, 00:15:16.053 "r_mbytes_per_sec": 0, 00:15:16.053 "w_mbytes_per_sec": 0 00:15:16.053 }, 00:15:16.053 "claimed": false, 00:15:16.053 "zoned": false, 00:15:16.053 "supported_io_types": { 00:15:16.053 "read": true, 00:15:16.053 "write": true, 00:15:16.053 "unmap": true, 00:15:16.053 "flush": true, 00:15:16.053 "reset": true, 00:15:16.053 "nvme_admin": false, 
00:15:16.053 "nvme_io": false, 00:15:16.053 "nvme_io_md": false, 00:15:16.053 "write_zeroes": true, 00:15:16.053 "zcopy": true, 00:15:16.053 "get_zone_info": false, 00:15:16.053 "zone_management": false, 00:15:16.053 "zone_append": false, 00:15:16.053 "compare": false, 00:15:16.053 "compare_and_write": false, 00:15:16.053 "abort": true, 00:15:16.053 "seek_hole": false, 00:15:16.053 "seek_data": false, 00:15:16.053 "copy": true, 00:15:16.053 "nvme_iov_md": false 00:15:16.053 }, 00:15:16.053 "memory_domains": [ 00:15:16.053 { 00:15:16.053 "dma_device_id": "system", 00:15:16.053 "dma_device_type": 1 00:15:16.053 }, 00:15:16.053 { 00:15:16.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.053 "dma_device_type": 2 00:15:16.053 } 00:15:16.053 ], 00:15:16.053 "driver_specific": {} 00:15:16.053 } 00:15:16.053 ] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.053 [2024-11-06 09:07:52.464101] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:16.053 [2024-11-06 09:07:52.464168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:16.053 [2024-11-06 09:07:52.464203] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.053 [2024-11-06 09:07:52.467095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.053 
09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.053 "name": "Existed_Raid", 00:15:16.053 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:16.053 "strip_size_kb": 0, 00:15:16.053 "state": "configuring", 00:15:16.053 "raid_level": "raid1", 00:15:16.053 "superblock": true, 00:15:16.053 "num_base_bdevs": 3, 00:15:16.053 "num_base_bdevs_discovered": 2, 00:15:16.053 "num_base_bdevs_operational": 3, 00:15:16.053 "base_bdevs_list": [ 00:15:16.053 { 00:15:16.053 "name": "BaseBdev1", 00:15:16.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.053 "is_configured": false, 00:15:16.053 "data_offset": 0, 00:15:16.053 "data_size": 0 00:15:16.053 }, 00:15:16.053 { 00:15:16.053 "name": "BaseBdev2", 00:15:16.053 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:16.053 "is_configured": true, 00:15:16.053 "data_offset": 2048, 00:15:16.053 "data_size": 63488 00:15:16.053 }, 00:15:16.053 { 00:15:16.053 "name": "BaseBdev3", 00:15:16.053 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:16.053 "is_configured": true, 00:15:16.053 "data_offset": 2048, 00:15:16.053 "data_size": 63488 00:15:16.053 } 00:15:16.053 ] 00:15:16.053 }' 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.053 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.624 [2024-11-06 09:07:52.968207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:16.624 09:07:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.624 09:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.624 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.624 "name": 
"Existed_Raid", 00:15:16.624 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:16.624 "strip_size_kb": 0, 00:15:16.624 "state": "configuring", 00:15:16.624 "raid_level": "raid1", 00:15:16.624 "superblock": true, 00:15:16.624 "num_base_bdevs": 3, 00:15:16.624 "num_base_bdevs_discovered": 1, 00:15:16.624 "num_base_bdevs_operational": 3, 00:15:16.624 "base_bdevs_list": [ 00:15:16.624 { 00:15:16.624 "name": "BaseBdev1", 00:15:16.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.624 "is_configured": false, 00:15:16.624 "data_offset": 0, 00:15:16.624 "data_size": 0 00:15:16.624 }, 00:15:16.624 { 00:15:16.624 "name": null, 00:15:16.624 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:16.624 "is_configured": false, 00:15:16.624 "data_offset": 0, 00:15:16.624 "data_size": 63488 00:15:16.624 }, 00:15:16.624 { 00:15:16.624 "name": "BaseBdev3", 00:15:16.624 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:16.625 "is_configured": true, 00:15:16.625 "data_offset": 2048, 00:15:16.625 "data_size": 63488 00:15:16.625 } 00:15:16.625 ] 00:15:16.625 }' 00:15:16.625 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.625 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:17.199 
09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.199 [2024-11-06 09:07:53.596398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.199 BaseBdev1 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:17.199 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.199 [ 00:15:17.199 { 00:15:17.199 "name": "BaseBdev1", 00:15:17.200 "aliases": [ 00:15:17.200 "95879bcd-c224-42f7-bd40-94ef26cb9f64" 00:15:17.200 ], 00:15:17.200 "product_name": "Malloc disk", 00:15:17.200 "block_size": 512, 00:15:17.200 "num_blocks": 65536, 00:15:17.200 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:17.200 "assigned_rate_limits": { 00:15:17.200 "rw_ios_per_sec": 0, 00:15:17.200 "rw_mbytes_per_sec": 0, 00:15:17.200 "r_mbytes_per_sec": 0, 00:15:17.200 "w_mbytes_per_sec": 0 00:15:17.200 }, 00:15:17.200 "claimed": true, 00:15:17.200 "claim_type": "exclusive_write", 00:15:17.200 "zoned": false, 00:15:17.200 "supported_io_types": { 00:15:17.200 "read": true, 00:15:17.200 "write": true, 00:15:17.200 "unmap": true, 00:15:17.200 "flush": true, 00:15:17.200 "reset": true, 00:15:17.200 "nvme_admin": false, 00:15:17.200 "nvme_io": false, 00:15:17.200 "nvme_io_md": false, 00:15:17.200 "write_zeroes": true, 00:15:17.200 "zcopy": true, 00:15:17.200 "get_zone_info": false, 00:15:17.200 "zone_management": false, 00:15:17.200 "zone_append": false, 00:15:17.200 "compare": false, 00:15:17.200 "compare_and_write": false, 00:15:17.200 "abort": true, 00:15:17.200 "seek_hole": false, 00:15:17.200 "seek_data": false, 00:15:17.200 "copy": true, 00:15:17.200 "nvme_iov_md": false 00:15:17.200 }, 00:15:17.200 "memory_domains": [ 00:15:17.200 { 00:15:17.200 "dma_device_id": "system", 00:15:17.200 "dma_device_type": 1 00:15:17.200 }, 00:15:17.200 { 00:15:17.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:17.200 "dma_device_type": 2 00:15:17.200 } 00:15:17.200 ], 00:15:17.200 "driver_specific": {} 00:15:17.200 } 00:15:17.200 ] 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:17.200 
09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.200 "name": "Existed_Raid", 00:15:17.200 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:17.200 "strip_size_kb": 0, 
00:15:17.200 "state": "configuring", 00:15:17.200 "raid_level": "raid1", 00:15:17.200 "superblock": true, 00:15:17.200 "num_base_bdevs": 3, 00:15:17.200 "num_base_bdevs_discovered": 2, 00:15:17.200 "num_base_bdevs_operational": 3, 00:15:17.200 "base_bdevs_list": [ 00:15:17.200 { 00:15:17.200 "name": "BaseBdev1", 00:15:17.200 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:17.200 "is_configured": true, 00:15:17.200 "data_offset": 2048, 00:15:17.200 "data_size": 63488 00:15:17.200 }, 00:15:17.200 { 00:15:17.200 "name": null, 00:15:17.200 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:17.200 "is_configured": false, 00:15:17.200 "data_offset": 0, 00:15:17.200 "data_size": 63488 00:15:17.200 }, 00:15:17.200 { 00:15:17.200 "name": "BaseBdev3", 00:15:17.200 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:17.200 "is_configured": true, 00:15:17.200 "data_offset": 2048, 00:15:17.200 "data_size": 63488 00:15:17.200 } 00:15:17.200 ] 00:15:17.200 }' 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.200 09:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.768 [2024-11-06 09:07:54.196613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.768 "name": "Existed_Raid", 00:15:17.768 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:17.768 "strip_size_kb": 0, 00:15:17.768 "state": "configuring", 00:15:17.768 "raid_level": "raid1", 00:15:17.768 "superblock": true, 00:15:17.768 "num_base_bdevs": 3, 00:15:17.768 "num_base_bdevs_discovered": 1, 00:15:17.768 "num_base_bdevs_operational": 3, 00:15:17.768 "base_bdevs_list": [ 00:15:17.768 { 00:15:17.768 "name": "BaseBdev1", 00:15:17.768 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:17.768 "is_configured": true, 00:15:17.768 "data_offset": 2048, 00:15:17.768 "data_size": 63488 00:15:17.768 }, 00:15:17.768 { 00:15:17.768 "name": null, 00:15:17.768 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:17.768 "is_configured": false, 00:15:17.768 "data_offset": 0, 00:15:17.768 "data_size": 63488 00:15:17.768 }, 00:15:17.768 { 00:15:17.768 "name": null, 00:15:17.768 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:17.768 "is_configured": false, 00:15:17.768 "data_offset": 0, 00:15:17.768 "data_size": 63488 00:15:17.768 } 00:15:17.768 ] 00:15:17.768 }' 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.768 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 
09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 [2024-11-06 09:07:54.744802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.335 "name": "Existed_Raid", 00:15:18.335 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:18.335 "strip_size_kb": 0, 00:15:18.335 "state": "configuring", 00:15:18.335 "raid_level": "raid1", 00:15:18.335 "superblock": true, 00:15:18.335 "num_base_bdevs": 3, 00:15:18.335 "num_base_bdevs_discovered": 2, 00:15:18.335 "num_base_bdevs_operational": 3, 00:15:18.335 "base_bdevs_list": [ 00:15:18.335 { 00:15:18.335 "name": "BaseBdev1", 00:15:18.335 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:18.335 "is_configured": true, 00:15:18.335 "data_offset": 2048, 00:15:18.335 "data_size": 63488 00:15:18.335 }, 00:15:18.335 { 00:15:18.335 "name": null, 00:15:18.335 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:18.335 "is_configured": false, 00:15:18.335 "data_offset": 0, 00:15:18.335 "data_size": 63488 00:15:18.335 }, 00:15:18.335 { 00:15:18.335 "name": "BaseBdev3", 00:15:18.335 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:18.335 "is_configured": true, 00:15:18.335 "data_offset": 2048, 00:15:18.335 "data_size": 63488 00:15:18.335 } 00:15:18.335 ] 00:15:18.335 }' 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.335 09:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.903 [2024-11-06 09:07:55.324979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.903 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.904 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.904 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.162 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.162 "name": "Existed_Raid", 00:15:19.162 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:19.162 "strip_size_kb": 0, 00:15:19.162 "state": "configuring", 00:15:19.162 "raid_level": "raid1", 00:15:19.162 "superblock": true, 00:15:19.162 "num_base_bdevs": 3, 00:15:19.162 "num_base_bdevs_discovered": 1, 00:15:19.162 "num_base_bdevs_operational": 3, 00:15:19.163 "base_bdevs_list": [ 00:15:19.163 { 00:15:19.163 "name": null, 00:15:19.163 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:19.163 "is_configured": false, 00:15:19.163 "data_offset": 0, 00:15:19.163 "data_size": 63488 00:15:19.163 }, 00:15:19.163 { 00:15:19.163 "name": null, 00:15:19.163 "uuid": 
"4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:19.163 "is_configured": false, 00:15:19.163 "data_offset": 0, 00:15:19.163 "data_size": 63488 00:15:19.163 }, 00:15:19.163 { 00:15:19.163 "name": "BaseBdev3", 00:15:19.163 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:19.163 "is_configured": true, 00:15:19.163 "data_offset": 2048, 00:15:19.163 "data_size": 63488 00:15:19.163 } 00:15:19.163 ] 00:15:19.163 }' 00:15:19.163 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.163 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.421 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.421 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.421 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.421 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:19.421 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.421 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.422 [2024-11-06 09:07:55.947474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.422 09:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.680 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.680 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.680 09:07:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.680 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.680 "name": "Existed_Raid", 00:15:19.680 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:19.680 "strip_size_kb": 0, 00:15:19.680 "state": "configuring", 00:15:19.680 
"raid_level": "raid1", 00:15:19.680 "superblock": true, 00:15:19.680 "num_base_bdevs": 3, 00:15:19.680 "num_base_bdevs_discovered": 2, 00:15:19.680 "num_base_bdevs_operational": 3, 00:15:19.680 "base_bdevs_list": [ 00:15:19.680 { 00:15:19.680 "name": null, 00:15:19.680 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:19.680 "is_configured": false, 00:15:19.680 "data_offset": 0, 00:15:19.680 "data_size": 63488 00:15:19.680 }, 00:15:19.680 { 00:15:19.680 "name": "BaseBdev2", 00:15:19.680 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:19.680 "is_configured": true, 00:15:19.680 "data_offset": 2048, 00:15:19.680 "data_size": 63488 00:15:19.680 }, 00:15:19.680 { 00:15:19.680 "name": "BaseBdev3", 00:15:19.680 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:19.680 "is_configured": true, 00:15:19.680 "data_offset": 2048, 00:15:19.680 "data_size": 63488 00:15:19.680 } 00:15:19.680 ] 00:15:19.680 }' 00:15:19.680 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.680 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.939 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.939 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.939 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.939 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.197 09:07:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 95879bcd-c224-42f7-bd40-94ef26cb9f64 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.197 [2024-11-06 09:07:56.597651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:20.197 [2024-11-06 09:07:56.597957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:20.197 [2024-11-06 09:07:56.597977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:20.197 [2024-11-06 09:07:56.598284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:20.197 NewBaseBdev 00:15:20.197 [2024-11-06 09:07:56.598486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:20.197 [2024-11-06 09:07:56.598539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:20.197 [2024-11-06 09:07:56.598711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:20.197 
09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.197 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.197 [ 00:15:20.197 { 00:15:20.197 "name": "NewBaseBdev", 00:15:20.197 "aliases": [ 00:15:20.197 "95879bcd-c224-42f7-bd40-94ef26cb9f64" 00:15:20.197 ], 00:15:20.197 "product_name": "Malloc disk", 00:15:20.197 "block_size": 512, 00:15:20.197 "num_blocks": 65536, 00:15:20.197 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:20.197 "assigned_rate_limits": { 00:15:20.197 "rw_ios_per_sec": 0, 00:15:20.197 "rw_mbytes_per_sec": 0, 00:15:20.197 "r_mbytes_per_sec": 0, 00:15:20.197 "w_mbytes_per_sec": 0 00:15:20.197 }, 00:15:20.197 "claimed": true, 00:15:20.198 "claim_type": "exclusive_write", 00:15:20.198 
"zoned": false, 00:15:20.198 "supported_io_types": { 00:15:20.198 "read": true, 00:15:20.198 "write": true, 00:15:20.198 "unmap": true, 00:15:20.198 "flush": true, 00:15:20.198 "reset": true, 00:15:20.198 "nvme_admin": false, 00:15:20.198 "nvme_io": false, 00:15:20.198 "nvme_io_md": false, 00:15:20.198 "write_zeroes": true, 00:15:20.198 "zcopy": true, 00:15:20.198 "get_zone_info": false, 00:15:20.198 "zone_management": false, 00:15:20.198 "zone_append": false, 00:15:20.198 "compare": false, 00:15:20.198 "compare_and_write": false, 00:15:20.198 "abort": true, 00:15:20.198 "seek_hole": false, 00:15:20.198 "seek_data": false, 00:15:20.198 "copy": true, 00:15:20.198 "nvme_iov_md": false 00:15:20.198 }, 00:15:20.198 "memory_domains": [ 00:15:20.198 { 00:15:20.198 "dma_device_id": "system", 00:15:20.198 "dma_device_type": 1 00:15:20.198 }, 00:15:20.198 { 00:15:20.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.198 "dma_device_type": 2 00:15:20.198 } 00:15:20.198 ], 00:15:20.198 "driver_specific": {} 00:15:20.198 } 00:15:20.198 ] 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.198 "name": "Existed_Raid", 00:15:20.198 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:20.198 "strip_size_kb": 0, 00:15:20.198 "state": "online", 00:15:20.198 "raid_level": "raid1", 00:15:20.198 "superblock": true, 00:15:20.198 "num_base_bdevs": 3, 00:15:20.198 "num_base_bdevs_discovered": 3, 00:15:20.198 "num_base_bdevs_operational": 3, 00:15:20.198 "base_bdevs_list": [ 00:15:20.198 { 00:15:20.198 "name": "NewBaseBdev", 00:15:20.198 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:20.198 "is_configured": true, 00:15:20.198 "data_offset": 2048, 00:15:20.198 "data_size": 63488 00:15:20.198 }, 00:15:20.198 { 00:15:20.198 "name": "BaseBdev2", 00:15:20.198 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:20.198 "is_configured": true, 00:15:20.198 "data_offset": 2048, 00:15:20.198 "data_size": 63488 00:15:20.198 }, 00:15:20.198 
{ 00:15:20.198 "name": "BaseBdev3", 00:15:20.198 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:20.198 "is_configured": true, 00:15:20.198 "data_offset": 2048, 00:15:20.198 "data_size": 63488 00:15:20.198 } 00:15:20.198 ] 00:15:20.198 }' 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.198 09:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.764 [2024-11-06 09:07:57.170252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:20.764 "name": "Existed_Raid", 00:15:20.764 
"aliases": [ 00:15:20.764 "d78c925e-30cf-4f00-a9a1-87c3d1dfa356" 00:15:20.764 ], 00:15:20.764 "product_name": "Raid Volume", 00:15:20.764 "block_size": 512, 00:15:20.764 "num_blocks": 63488, 00:15:20.764 "uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:20.764 "assigned_rate_limits": { 00:15:20.764 "rw_ios_per_sec": 0, 00:15:20.764 "rw_mbytes_per_sec": 0, 00:15:20.764 "r_mbytes_per_sec": 0, 00:15:20.764 "w_mbytes_per_sec": 0 00:15:20.764 }, 00:15:20.764 "claimed": false, 00:15:20.764 "zoned": false, 00:15:20.764 "supported_io_types": { 00:15:20.764 "read": true, 00:15:20.764 "write": true, 00:15:20.764 "unmap": false, 00:15:20.764 "flush": false, 00:15:20.764 "reset": true, 00:15:20.764 "nvme_admin": false, 00:15:20.764 "nvme_io": false, 00:15:20.764 "nvme_io_md": false, 00:15:20.764 "write_zeroes": true, 00:15:20.764 "zcopy": false, 00:15:20.764 "get_zone_info": false, 00:15:20.764 "zone_management": false, 00:15:20.764 "zone_append": false, 00:15:20.764 "compare": false, 00:15:20.764 "compare_and_write": false, 00:15:20.764 "abort": false, 00:15:20.764 "seek_hole": false, 00:15:20.764 "seek_data": false, 00:15:20.764 "copy": false, 00:15:20.764 "nvme_iov_md": false 00:15:20.764 }, 00:15:20.764 "memory_domains": [ 00:15:20.764 { 00:15:20.764 "dma_device_id": "system", 00:15:20.764 "dma_device_type": 1 00:15:20.764 }, 00:15:20.764 { 00:15:20.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.764 "dma_device_type": 2 00:15:20.764 }, 00:15:20.764 { 00:15:20.764 "dma_device_id": "system", 00:15:20.764 "dma_device_type": 1 00:15:20.764 }, 00:15:20.764 { 00:15:20.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.764 "dma_device_type": 2 00:15:20.764 }, 00:15:20.764 { 00:15:20.764 "dma_device_id": "system", 00:15:20.764 "dma_device_type": 1 00:15:20.764 }, 00:15:20.764 { 00:15:20.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.764 "dma_device_type": 2 00:15:20.764 } 00:15:20.764 ], 00:15:20.764 "driver_specific": { 00:15:20.764 "raid": { 00:15:20.764 
"uuid": "d78c925e-30cf-4f00-a9a1-87c3d1dfa356", 00:15:20.764 "strip_size_kb": 0, 00:15:20.764 "state": "online", 00:15:20.764 "raid_level": "raid1", 00:15:20.764 "superblock": true, 00:15:20.764 "num_base_bdevs": 3, 00:15:20.764 "num_base_bdevs_discovered": 3, 00:15:20.764 "num_base_bdevs_operational": 3, 00:15:20.764 "base_bdevs_list": [ 00:15:20.764 { 00:15:20.764 "name": "NewBaseBdev", 00:15:20.764 "uuid": "95879bcd-c224-42f7-bd40-94ef26cb9f64", 00:15:20.764 "is_configured": true, 00:15:20.764 "data_offset": 2048, 00:15:20.764 "data_size": 63488 00:15:20.764 }, 00:15:20.764 { 00:15:20.764 "name": "BaseBdev2", 00:15:20.764 "uuid": "4a350ce7-0f8e-452e-aec7-49cd6a03689b", 00:15:20.764 "is_configured": true, 00:15:20.764 "data_offset": 2048, 00:15:20.764 "data_size": 63488 00:15:20.764 }, 00:15:20.764 { 00:15:20.764 "name": "BaseBdev3", 00:15:20.764 "uuid": "e22cbd51-19f0-4659-b0ef-f3fc226431b3", 00:15:20.764 "is_configured": true, 00:15:20.764 "data_offset": 2048, 00:15:20.764 "data_size": 63488 00:15:20.764 } 00:15:20.764 ] 00:15:20.764 } 00:15:20.764 } 00:15:20.764 }' 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:20.764 BaseBdev2 00:15:20.764 BaseBdev3' 00:15:20.764 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.023 
09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.023 [2024-11-06 09:07:57.481983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:21.023 [2024-11-06 09:07:57.482160] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.023 [2024-11-06 09:07:57.482280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.023 [2024-11-06 09:07:57.482657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.023 [2024-11-06 09:07:57.482676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68018 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 68018 ']' 00:15:21.023 09:07:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 68018 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68018 00:15:21.023 killing process with pid 68018 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68018' 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 68018 00:15:21.023 [2024-11-06 09:07:57.518858] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.023 09:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 68018 00:15:21.282 [2024-11-06 09:07:57.791541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.657 09:07:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:22.657 00:15:22.657 real 0m11.759s 00:15:22.657 user 0m19.532s 00:15:22.657 sys 0m1.592s 00:15:22.657 09:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.657 ************************************ 00:15:22.657 END TEST raid_state_function_test_sb 00:15:22.658 ************************************ 00:15:22.658 09:07:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.658 09:07:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:15:22.658 09:07:58 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:22.658 09:07:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.658 09:07:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.658 ************************************ 00:15:22.658 START TEST raid_superblock_test 00:15:22.658 ************************************ 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:22.658 09:07:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68648 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68648 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 68648 ']' 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.658 09:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.658 [2024-11-06 09:07:58.977957] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:15:22.658 [2024-11-06 09:07:58.978373] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68648 ] 00:15:22.658 [2024-11-06 09:07:59.163189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.916 [2024-11-06 09:07:59.315678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.175 [2024-11-06 09:07:59.525910] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.175 [2024-11-06 09:07:59.526198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:23.742 
09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.742 09:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 malloc1 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 [2024-11-06 09:08:00.037454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:23.742 [2024-11-06 09:08:00.037657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.742 [2024-11-06 09:08:00.037702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:23.742 [2024-11-06 09:08:00.037718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.742 [2024-11-06 09:08:00.040547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.742 [2024-11-06 09:08:00.040596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:23.742 pt1 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 malloc2 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 [2024-11-06 09:08:00.089919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:23.742 [2024-11-06 09:08:00.089989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.742 [2024-11-06 09:08:00.090022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:23.742 [2024-11-06 09:08:00.090037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.742 [2024-11-06 09:08:00.092778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.742 [2024-11-06 09:08:00.092963] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:23.742 
pt2 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 malloc3 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 [2024-11-06 09:08:00.151424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:23.742 [2024-11-06 09:08:00.151488] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.742 [2024-11-06 09:08:00.151522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:23.742 [2024-11-06 09:08:00.151537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.742 [2024-11-06 09:08:00.154288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.742 [2024-11-06 09:08:00.154330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:23.742 pt3 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.742 [2024-11-06 09:08:00.159480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:23.742 [2024-11-06 09:08:00.162023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:23.742 [2024-11-06 09:08:00.162237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:23.742 [2024-11-06 09:08:00.162595] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:23.742 [2024-11-06 09:08:00.162740] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:23.742 [2024-11-06 09:08:00.163205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:23.742 
[2024-11-06 09:08:00.163557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:23.742 [2024-11-06 09:08:00.163683] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:23.742 [2024-11-06 09:08:00.164105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.742 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.743 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.743 09:08:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:23.743 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.743 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.743 "name": "raid_bdev1", 00:15:23.743 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:23.743 "strip_size_kb": 0, 00:15:23.743 "state": "online", 00:15:23.743 "raid_level": "raid1", 00:15:23.743 "superblock": true, 00:15:23.743 "num_base_bdevs": 3, 00:15:23.743 "num_base_bdevs_discovered": 3, 00:15:23.743 "num_base_bdevs_operational": 3, 00:15:23.743 "base_bdevs_list": [ 00:15:23.743 { 00:15:23.743 "name": "pt1", 00:15:23.743 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:23.743 "is_configured": true, 00:15:23.743 "data_offset": 2048, 00:15:23.743 "data_size": 63488 00:15:23.743 }, 00:15:23.743 { 00:15:23.743 "name": "pt2", 00:15:23.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.743 "is_configured": true, 00:15:23.743 "data_offset": 2048, 00:15:23.743 "data_size": 63488 00:15:23.743 }, 00:15:23.743 { 00:15:23.743 "name": "pt3", 00:15:23.743 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:23.743 "is_configured": true, 00:15:23.743 "data_offset": 2048, 00:15:23.743 "data_size": 63488 00:15:23.743 } 00:15:23.743 ] 00:15:23.743 }' 00:15:23.743 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.743 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:24.309 09:08:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.309 [2024-11-06 09:08:00.676537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.309 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:24.309 "name": "raid_bdev1", 00:15:24.309 "aliases": [ 00:15:24.309 "17e58ea8-ec9c-4fa3-91b3-e95d730579c1" 00:15:24.309 ], 00:15:24.309 "product_name": "Raid Volume", 00:15:24.309 "block_size": 512, 00:15:24.309 "num_blocks": 63488, 00:15:24.309 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:24.309 "assigned_rate_limits": { 00:15:24.309 "rw_ios_per_sec": 0, 00:15:24.309 "rw_mbytes_per_sec": 0, 00:15:24.309 "r_mbytes_per_sec": 0, 00:15:24.309 "w_mbytes_per_sec": 0 00:15:24.309 }, 00:15:24.309 "claimed": false, 00:15:24.309 "zoned": false, 00:15:24.309 "supported_io_types": { 00:15:24.309 "read": true, 00:15:24.309 "write": true, 00:15:24.309 "unmap": false, 00:15:24.309 "flush": false, 00:15:24.309 "reset": true, 00:15:24.309 "nvme_admin": false, 00:15:24.309 "nvme_io": false, 00:15:24.309 "nvme_io_md": false, 00:15:24.309 "write_zeroes": true, 00:15:24.309 "zcopy": false, 00:15:24.309 "get_zone_info": false, 00:15:24.309 "zone_management": false, 00:15:24.309 "zone_append": false, 00:15:24.309 "compare": false, 00:15:24.309 
"compare_and_write": false, 00:15:24.309 "abort": false, 00:15:24.309 "seek_hole": false, 00:15:24.309 "seek_data": false, 00:15:24.309 "copy": false, 00:15:24.309 "nvme_iov_md": false 00:15:24.309 }, 00:15:24.309 "memory_domains": [ 00:15:24.309 { 00:15:24.309 "dma_device_id": "system", 00:15:24.309 "dma_device_type": 1 00:15:24.309 }, 00:15:24.309 { 00:15:24.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.309 "dma_device_type": 2 00:15:24.309 }, 00:15:24.309 { 00:15:24.309 "dma_device_id": "system", 00:15:24.309 "dma_device_type": 1 00:15:24.309 }, 00:15:24.309 { 00:15:24.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.309 "dma_device_type": 2 00:15:24.309 }, 00:15:24.309 { 00:15:24.309 "dma_device_id": "system", 00:15:24.309 "dma_device_type": 1 00:15:24.309 }, 00:15:24.309 { 00:15:24.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.309 "dma_device_type": 2 00:15:24.309 } 00:15:24.309 ], 00:15:24.309 "driver_specific": { 00:15:24.309 "raid": { 00:15:24.309 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:24.309 "strip_size_kb": 0, 00:15:24.309 "state": "online", 00:15:24.309 "raid_level": "raid1", 00:15:24.309 "superblock": true, 00:15:24.309 "num_base_bdevs": 3, 00:15:24.309 "num_base_bdevs_discovered": 3, 00:15:24.309 "num_base_bdevs_operational": 3, 00:15:24.309 "base_bdevs_list": [ 00:15:24.309 { 00:15:24.309 "name": "pt1", 00:15:24.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:24.309 "is_configured": true, 00:15:24.309 "data_offset": 2048, 00:15:24.309 "data_size": 63488 00:15:24.309 }, 00:15:24.309 { 00:15:24.309 "name": "pt2", 00:15:24.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.309 "is_configured": true, 00:15:24.309 "data_offset": 2048, 00:15:24.309 "data_size": 63488 00:15:24.309 }, 00:15:24.309 { 00:15:24.309 "name": "pt3", 00:15:24.310 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:24.310 "is_configured": true, 00:15:24.310 "data_offset": 2048, 00:15:24.310 "data_size": 63488 00:15:24.310 } 
00:15:24.310 ] 00:15:24.310 } 00:15:24.310 } 00:15:24.310 }' 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:24.310 pt2 00:15:24.310 pt3' 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.310 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.568 09:08:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.568 09:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.568 [2024-11-06 09:08:00.984573] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=17e58ea8-ec9c-4fa3-91b3-e95d730579c1 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 17e58ea8-ec9c-4fa3-91b3-e95d730579c1 ']' 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.568 [2024-11-06 09:08:01.036237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.568 [2024-11-06 09:08:01.036408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.568 [2024-11-06 09:08:01.036528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.568 [2024-11-06 09:08:01.036626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.568 [2024-11-06 09:08:01.036643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.568 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:24.569 
09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.569 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 [2024-11-06 09:08:01.180340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:24.827 [2024-11-06 09:08:01.182898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:24.827 [2024-11-06 09:08:01.182975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:15:24.827 [2024-11-06 09:08:01.183052] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:24.827 [2024-11-06 09:08:01.183131] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:24.827 [2024-11-06 09:08:01.183169] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:24.827 [2024-11-06 09:08:01.183200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.827 [2024-11-06 09:08:01.183214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:24.827 request: 00:15:24.827 { 00:15:24.827 "name": "raid_bdev1", 00:15:24.827 "raid_level": "raid1", 00:15:24.827 "base_bdevs": [ 00:15:24.827 "malloc1", 00:15:24.827 "malloc2", 00:15:24.827 "malloc3" 00:15:24.827 ], 00:15:24.827 "superblock": false, 00:15:24.827 "method": "bdev_raid_create", 00:15:24.827 "req_id": 1 00:15:24.827 } 00:15:24.827 Got JSON-RPC error response 00:15:24.827 response: 00:15:24.827 { 00:15:24.827 "code": -17, 00:15:24.827 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:24.827 } 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.827 09:08:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.828 [2024-11-06 09:08:01.240310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:24.828 [2024-11-06 09:08:01.240393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.828 [2024-11-06 09:08:01.240430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:24.828 [2024-11-06 09:08:01.240445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.828 [2024-11-06 09:08:01.243322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.828 [2024-11-06 09:08:01.243369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:24.828 [2024-11-06 09:08:01.243481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:24.828 [2024-11-06 09:08:01.243548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:24.828 pt1 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.828 "name": "raid_bdev1", 00:15:24.828 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:24.828 "strip_size_kb": 0, 00:15:24.828 "state": "configuring", 00:15:24.828 
"raid_level": "raid1", 00:15:24.828 "superblock": true, 00:15:24.828 "num_base_bdevs": 3, 00:15:24.828 "num_base_bdevs_discovered": 1, 00:15:24.828 "num_base_bdevs_operational": 3, 00:15:24.828 "base_bdevs_list": [ 00:15:24.828 { 00:15:24.828 "name": "pt1", 00:15:24.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:24.828 "is_configured": true, 00:15:24.828 "data_offset": 2048, 00:15:24.828 "data_size": 63488 00:15:24.828 }, 00:15:24.828 { 00:15:24.828 "name": null, 00:15:24.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:24.828 "is_configured": false, 00:15:24.828 "data_offset": 2048, 00:15:24.828 "data_size": 63488 00:15:24.828 }, 00:15:24.828 { 00:15:24.828 "name": null, 00:15:24.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:24.828 "is_configured": false, 00:15:24.828 "data_offset": 2048, 00:15:24.828 "data_size": 63488 00:15:24.828 } 00:15:24.828 ] 00:15:24.828 }' 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.828 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.393 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:25.393 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:25.393 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.393 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.393 [2024-11-06 09:08:01.776429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:25.393 [2024-11-06 09:08:01.776508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.393 [2024-11-06 09:08:01.776541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:25.393 [2024-11-06 09:08:01.776556] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.393 [2024-11-06 09:08:01.777123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.393 [2024-11-06 09:08:01.777155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:25.393 [2024-11-06 09:08:01.777268] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:25.394 [2024-11-06 09:08:01.777300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:25.394 pt2 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.394 [2024-11-06 09:08:01.784415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.394 "name": "raid_bdev1", 00:15:25.394 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:25.394 "strip_size_kb": 0, 00:15:25.394 "state": "configuring", 00:15:25.394 "raid_level": "raid1", 00:15:25.394 "superblock": true, 00:15:25.394 "num_base_bdevs": 3, 00:15:25.394 "num_base_bdevs_discovered": 1, 00:15:25.394 "num_base_bdevs_operational": 3, 00:15:25.394 "base_bdevs_list": [ 00:15:25.394 { 00:15:25.394 "name": "pt1", 00:15:25.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.394 "is_configured": true, 00:15:25.394 "data_offset": 2048, 00:15:25.394 "data_size": 63488 00:15:25.394 }, 00:15:25.394 { 00:15:25.394 "name": null, 00:15:25.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.394 "is_configured": false, 00:15:25.394 "data_offset": 0, 00:15:25.394 "data_size": 63488 00:15:25.394 }, 00:15:25.394 { 00:15:25.394 "name": null, 00:15:25.394 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:25.394 "is_configured": false, 00:15:25.394 "data_offset": 2048, 00:15:25.394 
"data_size": 63488 00:15:25.394 } 00:15:25.394 ] 00:15:25.394 }' 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.394 09:08:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.960 [2024-11-06 09:08:02.312563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:25.960 [2024-11-06 09:08:02.312664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.960 [2024-11-06 09:08:02.312692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:25.960 [2024-11-06 09:08:02.312709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.960 [2024-11-06 09:08:02.313313] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.960 [2024-11-06 09:08:02.313345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:25.960 [2024-11-06 09:08:02.313448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:25.960 [2024-11-06 09:08:02.313509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:25.960 pt2 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:25.960 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.961 [2024-11-06 09:08:02.324562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:25.961 [2024-11-06 09:08:02.324649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.961 [2024-11-06 09:08:02.324682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:25.961 [2024-11-06 09:08:02.324703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.961 [2024-11-06 09:08:02.325289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.961 [2024-11-06 09:08:02.325330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:25.961 [2024-11-06 09:08:02.325431] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:25.961 [2024-11-06 09:08:02.325467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:25.961 [2024-11-06 09:08:02.325635] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:25.961 [2024-11-06 09:08:02.325658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:25.961 [2024-11-06 09:08:02.325987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:25.961 [2024-11-06 09:08:02.326190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:15:25.961 [2024-11-06 09:08:02.326206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:25.961 [2024-11-06 09:08:02.326382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.961 pt3 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.961 "name": "raid_bdev1", 00:15:25.961 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:25.961 "strip_size_kb": 0, 00:15:25.961 "state": "online", 00:15:25.961 "raid_level": "raid1", 00:15:25.961 "superblock": true, 00:15:25.961 "num_base_bdevs": 3, 00:15:25.961 "num_base_bdevs_discovered": 3, 00:15:25.961 "num_base_bdevs_operational": 3, 00:15:25.961 "base_bdevs_list": [ 00:15:25.961 { 00:15:25.961 "name": "pt1", 00:15:25.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:25.961 "is_configured": true, 00:15:25.961 "data_offset": 2048, 00:15:25.961 "data_size": 63488 00:15:25.961 }, 00:15:25.961 { 00:15:25.961 "name": "pt2", 00:15:25.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:25.961 "is_configured": true, 00:15:25.961 "data_offset": 2048, 00:15:25.961 "data_size": 63488 00:15:25.961 }, 00:15:25.961 { 00:15:25.961 "name": "pt3", 00:15:25.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:25.961 "is_configured": true, 00:15:25.961 "data_offset": 2048, 00:15:25.961 "data_size": 63488 00:15:25.961 } 00:15:25.961 ] 00:15:25.961 }' 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.961 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:26.530 09:08:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.530 [2024-11-06 09:08:02.841069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.530 "name": "raid_bdev1", 00:15:26.530 "aliases": [ 00:15:26.530 "17e58ea8-ec9c-4fa3-91b3-e95d730579c1" 00:15:26.530 ], 00:15:26.530 "product_name": "Raid Volume", 00:15:26.530 "block_size": 512, 00:15:26.530 "num_blocks": 63488, 00:15:26.530 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:26.530 "assigned_rate_limits": { 00:15:26.530 "rw_ios_per_sec": 0, 00:15:26.530 "rw_mbytes_per_sec": 0, 00:15:26.530 "r_mbytes_per_sec": 0, 00:15:26.530 "w_mbytes_per_sec": 0 00:15:26.530 }, 00:15:26.530 "claimed": false, 00:15:26.530 "zoned": false, 00:15:26.530 "supported_io_types": { 00:15:26.530 "read": true, 00:15:26.530 "write": true, 00:15:26.530 "unmap": false, 00:15:26.530 "flush": false, 00:15:26.530 "reset": true, 00:15:26.530 "nvme_admin": false, 00:15:26.530 "nvme_io": false, 00:15:26.530 "nvme_io_md": false, 00:15:26.530 "write_zeroes": true, 00:15:26.530 "zcopy": false, 00:15:26.530 "get_zone_info": false, 00:15:26.530 
"zone_management": false, 00:15:26.530 "zone_append": false, 00:15:26.530 "compare": false, 00:15:26.530 "compare_and_write": false, 00:15:26.530 "abort": false, 00:15:26.530 "seek_hole": false, 00:15:26.530 "seek_data": false, 00:15:26.530 "copy": false, 00:15:26.530 "nvme_iov_md": false 00:15:26.530 }, 00:15:26.530 "memory_domains": [ 00:15:26.530 { 00:15:26.530 "dma_device_id": "system", 00:15:26.530 "dma_device_type": 1 00:15:26.530 }, 00:15:26.530 { 00:15:26.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.530 "dma_device_type": 2 00:15:26.530 }, 00:15:26.530 { 00:15:26.530 "dma_device_id": "system", 00:15:26.530 "dma_device_type": 1 00:15:26.530 }, 00:15:26.530 { 00:15:26.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.530 "dma_device_type": 2 00:15:26.530 }, 00:15:26.530 { 00:15:26.530 "dma_device_id": "system", 00:15:26.530 "dma_device_type": 1 00:15:26.530 }, 00:15:26.530 { 00:15:26.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.530 "dma_device_type": 2 00:15:26.530 } 00:15:26.530 ], 00:15:26.530 "driver_specific": { 00:15:26.530 "raid": { 00:15:26.530 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:26.530 "strip_size_kb": 0, 00:15:26.530 "state": "online", 00:15:26.530 "raid_level": "raid1", 00:15:26.530 "superblock": true, 00:15:26.530 "num_base_bdevs": 3, 00:15:26.530 "num_base_bdevs_discovered": 3, 00:15:26.530 "num_base_bdevs_operational": 3, 00:15:26.530 "base_bdevs_list": [ 00:15:26.530 { 00:15:26.530 "name": "pt1", 00:15:26.530 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.530 "is_configured": true, 00:15:26.530 "data_offset": 2048, 00:15:26.530 "data_size": 63488 00:15:26.530 }, 00:15:26.530 { 00:15:26.530 "name": "pt2", 00:15:26.530 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.530 "is_configured": true, 00:15:26.530 "data_offset": 2048, 00:15:26.530 "data_size": 63488 00:15:26.530 }, 00:15:26.530 { 00:15:26.530 "name": "pt3", 00:15:26.530 "uuid": "00000000-0000-0000-0000-000000000003", 
00:15:26.530 "is_configured": true, 00:15:26.530 "data_offset": 2048, 00:15:26.530 "data_size": 63488 00:15:26.530 } 00:15:26.530 ] 00:15:26.530 } 00:15:26.530 } 00:15:26.530 }' 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:26.530 pt2 00:15:26.530 pt3' 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.530 09:08:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.530 
09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.530 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 [2024-11-06 09:08:03.149105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 17e58ea8-ec9c-4fa3-91b3-e95d730579c1 '!=' 17e58ea8-ec9c-4fa3-91b3-e95d730579c1 ']' 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 [2024-11-06 09:08:03.200862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.789 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.789 "name": "raid_bdev1", 00:15:26.789 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:26.789 "strip_size_kb": 0, 00:15:26.789 "state": "online", 00:15:26.789 "raid_level": "raid1", 00:15:26.789 "superblock": true, 00:15:26.789 "num_base_bdevs": 3, 00:15:26.789 "num_base_bdevs_discovered": 2, 00:15:26.789 "num_base_bdevs_operational": 2, 00:15:26.789 "base_bdevs_list": [ 00:15:26.789 { 00:15:26.789 "name": null, 00:15:26.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.789 "is_configured": false, 00:15:26.789 "data_offset": 0, 00:15:26.789 "data_size": 63488 00:15:26.789 }, 00:15:26.789 { 00:15:26.789 "name": "pt2", 00:15:26.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.790 "is_configured": true, 00:15:26.790 "data_offset": 2048, 00:15:26.790 "data_size": 63488 00:15:26.790 }, 00:15:26.790 { 00:15:26.790 "name": "pt3", 00:15:26.790 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:26.790 "is_configured": true, 00:15:26.790 "data_offset": 2048, 00:15:26.790 "data_size": 63488 00:15:26.790 } 00:15:26.790 ] 00:15:26.790 }' 00:15:26.790 09:08:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.790 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.355 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:27.355 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.355 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.355 [2024-11-06 09:08:03.732940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.355 [2024-11-06 09:08:03.733103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.355 [2024-11-06 09:08:03.733220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.355 [2024-11-06 09:08:03.733298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.356 [2024-11-06 09:08:03.733321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:27.356 
09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.356 [2024-11-06 09:08:03.820977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.356 [2024-11-06 09:08:03.821065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.356 [2024-11-06 09:08:03.821093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:27.356 [2024-11-06 09:08:03.821110] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.356 [2024-11-06 09:08:03.823980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.356 [2024-11-06 09:08:03.824151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.356 [2024-11-06 09:08:03.824274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:27.356 [2024-11-06 09:08:03.824340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.356 pt2 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.356 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.613 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.613 "name": "raid_bdev1", 00:15:27.613 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:27.613 "strip_size_kb": 0, 00:15:27.613 "state": "configuring", 00:15:27.613 "raid_level": "raid1", 00:15:27.613 "superblock": true, 00:15:27.613 "num_base_bdevs": 3, 00:15:27.613 "num_base_bdevs_discovered": 1, 00:15:27.613 "num_base_bdevs_operational": 2, 00:15:27.613 "base_bdevs_list": [ 00:15:27.613 { 00:15:27.613 "name": null, 00:15:27.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.613 "is_configured": false, 00:15:27.613 "data_offset": 2048, 00:15:27.613 "data_size": 63488 00:15:27.613 }, 00:15:27.613 { 00:15:27.613 "name": "pt2", 00:15:27.613 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.613 "is_configured": true, 00:15:27.613 "data_offset": 2048, 00:15:27.613 "data_size": 63488 00:15:27.613 }, 00:15:27.613 { 00:15:27.613 "name": null, 00:15:27.613 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:27.613 "is_configured": false, 00:15:27.613 "data_offset": 2048, 00:15:27.613 "data_size": 63488 00:15:27.613 } 00:15:27.613 ] 00:15:27.613 }' 
00:15:27.613 09:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.613 09:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.873 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:27.873 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:27.873 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:27.873 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:27.873 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.873 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.873 [2024-11-06 09:08:04.361082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:27.873 [2024-11-06 09:08:04.361164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.873 [2024-11-06 09:08:04.361193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:27.873 [2024-11-06 09:08:04.361210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.873 [2024-11-06 09:08:04.361770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.873 [2024-11-06 09:08:04.361802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:27.873 [2024-11-06 09:08:04.361951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:27.873 [2024-11-06 09:08:04.361993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:27.873 [2024-11-06 09:08:04.362138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:27.873 [2024-11-06 09:08:04.362159] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:27.873 [2024-11-06 09:08:04.362487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:27.873 [2024-11-06 09:08:04.362708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:27.873 [2024-11-06 09:08:04.362724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:27.873 [2024-11-06 09:08:04.362913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.873 pt3 00:15:27.873 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.874 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.150 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.150 "name": "raid_bdev1", 00:15:28.150 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:28.150 "strip_size_kb": 0, 00:15:28.150 "state": "online", 00:15:28.150 "raid_level": "raid1", 00:15:28.150 "superblock": true, 00:15:28.150 "num_base_bdevs": 3, 00:15:28.150 "num_base_bdevs_discovered": 2, 00:15:28.150 "num_base_bdevs_operational": 2, 00:15:28.150 "base_bdevs_list": [ 00:15:28.150 { 00:15:28.150 "name": null, 00:15:28.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.150 "is_configured": false, 00:15:28.150 "data_offset": 2048, 00:15:28.150 "data_size": 63488 00:15:28.150 }, 00:15:28.150 { 00:15:28.150 "name": "pt2", 00:15:28.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.150 "is_configured": true, 00:15:28.150 "data_offset": 2048, 00:15:28.150 "data_size": 63488 00:15:28.150 }, 00:15:28.150 { 00:15:28.150 "name": "pt3", 00:15:28.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.150 "is_configured": true, 00:15:28.150 "data_offset": 2048, 00:15:28.150 "data_size": 63488 00:15:28.150 } 00:15:28.150 ] 00:15:28.150 }' 00:15:28.150 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.150 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.408 
09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.408 [2024-11-06 09:08:04.865196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.408 [2024-11-06 09:08:04.865237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:28.408 [2024-11-06 09:08:04.865334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:28.408 [2024-11-06 09:08:04.865417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:28.408 [2024-11-06 09:08:04.865432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.408 09:08:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.408 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.408 [2024-11-06 09:08:04.937250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:28.408 [2024-11-06 09:08:04.937328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.408 [2024-11-06 09:08:04.937361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:28.408 [2024-11-06 09:08:04.937382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.408 [2024-11-06 09:08:04.940279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.408 [2024-11-06 09:08:04.940325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:28.408 [2024-11-06 09:08:04.940437] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:28.408 [2024-11-06 09:08:04.940495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:28.408 [2024-11-06 09:08:04.940661] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:28.408 [2024-11-06 09:08:04.940679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:28.408 [2024-11-06 09:08:04.940703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:28.408 [2024-11-06 
09:08:04.940776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:28.408 pt1 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.666 09:08:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.666 "name": "raid_bdev1", 00:15:28.666 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:28.666 "strip_size_kb": 0, 00:15:28.666 "state": "configuring", 00:15:28.666 "raid_level": "raid1", 00:15:28.666 "superblock": true, 00:15:28.666 "num_base_bdevs": 3, 00:15:28.666 "num_base_bdevs_discovered": 1, 00:15:28.666 "num_base_bdevs_operational": 2, 00:15:28.666 "base_bdevs_list": [ 00:15:28.666 { 00:15:28.666 "name": null, 00:15:28.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.666 "is_configured": false, 00:15:28.666 "data_offset": 2048, 00:15:28.666 "data_size": 63488 00:15:28.666 }, 00:15:28.666 { 00:15:28.666 "name": "pt2", 00:15:28.666 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.666 "is_configured": true, 00:15:28.666 "data_offset": 2048, 00:15:28.666 "data_size": 63488 00:15:28.666 }, 00:15:28.666 { 00:15:28.666 "name": null, 00:15:28.666 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:28.666 "is_configured": false, 00:15:28.666 "data_offset": 2048, 00:15:28.666 "data_size": 63488 00:15:28.666 } 00:15:28.666 ] 00:15:28.666 }' 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.666 09:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 
-- # [[ false == \f\a\l\s\e ]] 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.232 [2024-11-06 09:08:05.513408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:29.232 [2024-11-06 09:08:05.513484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:29.232 [2024-11-06 09:08:05.513516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:29.232 [2024-11-06 09:08:05.513530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:29.232 [2024-11-06 09:08:05.514118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:29.232 [2024-11-06 09:08:05.514150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:29.232 [2024-11-06 09:08:05.514256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:29.232 [2024-11-06 09:08:05.514315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:29.232 [2024-11-06 09:08:05.514475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:29.232 [2024-11-06 09:08:05.514491] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:29.232 [2024-11-06 09:08:05.514830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:29.232 [2024-11-06 09:08:05.515040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:29.232 [2024-11-06 09:08:05.515061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000008900 00:15:29.232 [2024-11-06 09:08:05.515226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.232 pt3 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.232 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.233 09:08:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.233 "name": "raid_bdev1", 00:15:29.233 "uuid": "17e58ea8-ec9c-4fa3-91b3-e95d730579c1", 00:15:29.233 "strip_size_kb": 0, 00:15:29.233 "state": "online", 00:15:29.233 "raid_level": "raid1", 00:15:29.233 "superblock": true, 00:15:29.233 "num_base_bdevs": 3, 00:15:29.233 "num_base_bdevs_discovered": 2, 00:15:29.233 "num_base_bdevs_operational": 2, 00:15:29.233 "base_bdevs_list": [ 00:15:29.233 { 00:15:29.233 "name": null, 00:15:29.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.233 "is_configured": false, 00:15:29.233 "data_offset": 2048, 00:15:29.233 "data_size": 63488 00:15:29.233 }, 00:15:29.233 { 00:15:29.233 "name": "pt2", 00:15:29.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.233 "is_configured": true, 00:15:29.233 "data_offset": 2048, 00:15:29.233 "data_size": 63488 00:15:29.233 }, 00:15:29.233 { 00:15:29.233 "name": "pt3", 00:15:29.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:29.233 "is_configured": true, 00:15:29.233 "data_offset": 2048, 00:15:29.233 "data_size": 63488 00:15:29.233 } 00:15:29.233 ] 00:15:29.233 }' 00:15:29.233 09:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.233 09:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:29.798 
09:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:29.798 [2024-11-06 09:08:06.113887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 17e58ea8-ec9c-4fa3-91b3-e95d730579c1 '!=' 17e58ea8-ec9c-4fa3-91b3-e95d730579c1 ']' 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68648 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 68648 ']' 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 68648 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68648 00:15:29.798 killing process with pid 68648 00:15:29.798 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:29.799 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:29.799 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68648' 00:15:29.799 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 68648 00:15:29.799 [2024-11-06 
09:08:06.187889] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.799 09:08:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 68648 00:15:29.799 [2024-11-06 09:08:06.188002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.799 [2024-11-06 09:08:06.188081] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.799 [2024-11-06 09:08:06.188101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:30.056 [2024-11-06 09:08:06.460458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.990 ************************************ 00:15:30.990 END TEST raid_superblock_test 00:15:30.990 ************************************ 00:15:30.990 09:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:30.990 00:15:30.990 real 0m8.611s 00:15:30.990 user 0m14.123s 00:15:30.990 sys 0m1.235s 00:15:30.990 09:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:30.990 09:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.990 09:08:07 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:15:30.990 09:08:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:30.990 09:08:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:30.990 09:08:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.248 ************************************ 00:15:31.248 START TEST raid_read_error_test 00:15:31.248 ************************************ 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 
00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:31.248 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:31.249 09:08:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Hx5wK8LPlz 00:15:31.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69101 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69101 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 69101 ']' 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:31.249 09:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.249 [2024-11-06 09:08:07.668388] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:15:31.249 [2024-11-06 09:08:07.668632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69101 ] 00:15:31.507 [2024-11-06 09:08:07.867057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.507 [2024-11-06 09:08:07.994837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.765 [2024-11-06 09:08:08.196588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.765 [2024-11-06 09:08:08.196671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 BaseBdev1_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 true 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 [2024-11-06 09:08:08.676847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:32.331 [2024-11-06 09:08:08.676929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.331 [2024-11-06 09:08:08.676977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:32.331 [2024-11-06 09:08:08.677005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.331 [2024-11-06 09:08:08.680203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.331 [2024-11-06 09:08:08.681004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.331 BaseBdev1 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 BaseBdev2_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 true 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 [2024-11-06 09:08:08.741255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:32.331 [2024-11-06 09:08:08.741338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.331 [2024-11-06 09:08:08.741379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:32.331 [2024-11-06 09:08:08.741407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.331 [2024-11-06 09:08:08.744496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.331 [2024-11-06 09:08:08.744725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:32.331 BaseBdev2 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 BaseBdev3_malloc 00:15:32.331 09:08:08 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 true 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 [2024-11-06 09:08:08.823939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:32.331 [2024-11-06 09:08:08.824213] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.331 [2024-11-06 09:08:08.824275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:32.331 [2024-11-06 09:08:08.824312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.331 [2024-11-06 09:08:08.827452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.331 [2024-11-06 09:08:08.827631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:32.331 BaseBdev3 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 [2024-11-06 09:08:08.836161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.331 [2024-11-06 09:08:08.838635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.331 [2024-11-06 09:08:08.838739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:32.331 [2024-11-06 09:08:08.839310] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:32.331 [2024-11-06 09:08:08.839448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:32.331 [2024-11-06 09:08:08.840046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:32.331 [2024-11-06 09:08:08.840506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.331 [2024-11-06 09:08:08.840647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:32.331 [2024-11-06 09:08:08.841179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.331 09:08:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.331 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.590 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.590 "name": "raid_bdev1", 00:15:32.590 "uuid": "aedfa7bf-779d-4945-a864-4cebdc7302ae", 00:15:32.590 "strip_size_kb": 0, 00:15:32.590 "state": "online", 00:15:32.590 "raid_level": "raid1", 00:15:32.590 "superblock": true, 00:15:32.590 "num_base_bdevs": 3, 00:15:32.590 "num_base_bdevs_discovered": 3, 00:15:32.590 "num_base_bdevs_operational": 3, 00:15:32.590 "base_bdevs_list": [ 00:15:32.590 { 00:15:32.590 "name": "BaseBdev1", 00:15:32.590 "uuid": "e97080f1-7e3e-5f57-a3f8-7606b3940921", 00:15:32.590 "is_configured": true, 00:15:32.590 "data_offset": 2048, 00:15:32.590 "data_size": 63488 00:15:32.590 }, 00:15:32.590 { 00:15:32.590 "name": "BaseBdev2", 00:15:32.590 "uuid": "5dff6c19-b386-56b3-a39e-04468d170bb2", 00:15:32.590 "is_configured": true, 00:15:32.590 "data_offset": 2048, 00:15:32.590 "data_size": 63488 
00:15:32.590 }, 00:15:32.590 { 00:15:32.590 "name": "BaseBdev3", 00:15:32.590 "uuid": "49775e83-bf47-50c4-a08d-8623737a118c", 00:15:32.590 "is_configured": true, 00:15:32.590 "data_offset": 2048, 00:15:32.590 "data_size": 63488 00:15:32.590 } 00:15:32.590 ] 00:15:32.590 }' 00:15:32.590 09:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.590 09:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.848 09:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:32.848 09:08:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:33.106 [2024-11-06 09:08:09.510630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.040 
09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.040 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.041 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.041 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.041 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.041 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.041 "name": "raid_bdev1", 00:15:34.041 "uuid": "aedfa7bf-779d-4945-a864-4cebdc7302ae", 00:15:34.041 "strip_size_kb": 0, 00:15:34.041 "state": "online", 00:15:34.041 "raid_level": "raid1", 00:15:34.041 "superblock": true, 00:15:34.041 "num_base_bdevs": 3, 00:15:34.041 "num_base_bdevs_discovered": 3, 00:15:34.041 "num_base_bdevs_operational": 3, 00:15:34.041 "base_bdevs_list": [ 00:15:34.041 { 00:15:34.041 "name": "BaseBdev1", 00:15:34.041 "uuid": "e97080f1-7e3e-5f57-a3f8-7606b3940921", 
00:15:34.041 "is_configured": true, 00:15:34.041 "data_offset": 2048, 00:15:34.041 "data_size": 63488 00:15:34.041 }, 00:15:34.041 { 00:15:34.041 "name": "BaseBdev2", 00:15:34.041 "uuid": "5dff6c19-b386-56b3-a39e-04468d170bb2", 00:15:34.041 "is_configured": true, 00:15:34.041 "data_offset": 2048, 00:15:34.041 "data_size": 63488 00:15:34.041 }, 00:15:34.041 { 00:15:34.041 "name": "BaseBdev3", 00:15:34.041 "uuid": "49775e83-bf47-50c4-a08d-8623737a118c", 00:15:34.041 "is_configured": true, 00:15:34.041 "data_offset": 2048, 00:15:34.041 "data_size": 63488 00:15:34.041 } 00:15:34.041 ] 00:15:34.041 }' 00:15:34.041 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.041 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.608 [2024-11-06 09:08:10.910803] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.608 [2024-11-06 09:08:10.910852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.608 [2024-11-06 09:08:10.914147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.608 [2024-11-06 09:08:10.914208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.608 [2024-11-06 09:08:10.914346] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.608 [2024-11-06 09:08:10.914362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:34.608 { 00:15:34.608 "results": [ 00:15:34.608 { 00:15:34.608 "job": "raid_bdev1", 
00:15:34.608 "core_mask": "0x1", 00:15:34.608 "workload": "randrw", 00:15:34.608 "percentage": 50, 00:15:34.608 "status": "finished", 00:15:34.608 "queue_depth": 1, 00:15:34.608 "io_size": 131072, 00:15:34.608 "runtime": 1.397458, 00:15:34.608 "iops": 9277.559683367945, 00:15:34.608 "mibps": 1159.694960420993, 00:15:34.608 "io_failed": 0, 00:15:34.608 "io_timeout": 0, 00:15:34.608 "avg_latency_us": 103.62380619149458, 00:15:34.608 "min_latency_us": 43.054545454545455, 00:15:34.608 "max_latency_us": 1921.3963636363637 00:15:34.608 } 00:15:34.608 ], 00:15:34.608 "core_count": 1 00:15:34.608 } 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69101 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 69101 ']' 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 69101 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69101 00:15:34.608 killing process with pid 69101 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69101' 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 69101 00:15:34.608 09:08:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 69101 00:15:34.608 [2024-11-06 09:08:10.965602] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.868 [2024-11-06 09:08:11.174768] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Hx5wK8LPlz 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:35.833 00:15:35.833 real 0m4.730s 00:15:35.833 user 0m5.840s 00:15:35.833 sys 0m0.618s 00:15:35.833 ************************************ 00:15:35.833 END TEST raid_read_error_test 00:15:35.833 ************************************ 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:35.833 09:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.833 09:08:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:15:35.833 09:08:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:35.833 09:08:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:35.833 09:08:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.833 ************************************ 00:15:35.833 START TEST raid_write_error_test 00:15:35.833 ************************************ 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:35.833 09:08:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.A3JxcQSf2y 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69246 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69246 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 69246 ']' 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:35.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:35.833 09:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.091 [2024-11-06 09:08:12.438102] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:15:36.091 [2024-11-06 09:08:12.438447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69246 ] 00:15:36.091 [2024-11-06 09:08:12.624704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.349 [2024-11-06 09:08:12.758906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.609 [2024-11-06 09:08:12.965672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.609 [2024-11-06 09:08:12.965756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 BaseBdev1_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 true 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 [2024-11-06 09:08:13.497494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:37.176 [2024-11-06 09:08:13.497576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.176 [2024-11-06 09:08:13.497609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:37.176 [2024-11-06 09:08:13.497628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.176 [2024-11-06 09:08:13.500561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.176 [2024-11-06 09:08:13.500762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:37.176 BaseBdev1 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.176 BaseBdev2_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 true 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 [2024-11-06 09:08:13.553497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:37.176 [2024-11-06 09:08:13.553570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.176 [2024-11-06 09:08:13.553600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:37.176 [2024-11-06 09:08:13.553618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.176 [2024-11-06 09:08:13.556453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.176 [2024-11-06 09:08:13.556502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:37.176 BaseBdev2 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:37.176 09:08:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 BaseBdev3_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 true 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 [2024-11-06 09:08:13.632986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:37.176 [2024-11-06 09:08:13.633073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.176 [2024-11-06 09:08:13.633104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:37.176 [2024-11-06 09:08:13.633122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.176 [2024-11-06 09:08:13.636132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.176 [2024-11-06 09:08:13.636186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:15:37.176 BaseBdev3 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.176 [2024-11-06 09:08:13.645195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.176 [2024-11-06 09:08:13.647714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.176 [2024-11-06 09:08:13.647995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.176 [2024-11-06 09:08:13.648305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:37.176 [2024-11-06 09:08:13.648326] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:37.176 [2024-11-06 09:08:13.648672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:37.176 [2024-11-06 09:08:13.648927] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:37.176 [2024-11-06 09:08:13.648949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:37.176 [2024-11-06 09:08:13.649233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.176 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.177 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.177 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.177 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.177 "name": "raid_bdev1", 00:15:37.177 "uuid": "0ab4990c-3666-4f26-8cb3-b479fb72f8ca", 00:15:37.177 "strip_size_kb": 0, 00:15:37.177 "state": "online", 00:15:37.177 "raid_level": "raid1", 00:15:37.177 "superblock": true, 00:15:37.177 "num_base_bdevs": 3, 00:15:37.177 "num_base_bdevs_discovered": 3, 00:15:37.177 "num_base_bdevs_operational": 3, 00:15:37.177 "base_bdevs_list": [ 00:15:37.177 { 00:15:37.177 "name": "BaseBdev1", 00:15:37.177 
"uuid": "d86f172e-f3be-52c7-bd6c-7a75db83762e", 00:15:37.177 "is_configured": true, 00:15:37.177 "data_offset": 2048, 00:15:37.177 "data_size": 63488 00:15:37.177 }, 00:15:37.177 { 00:15:37.177 "name": "BaseBdev2", 00:15:37.177 "uuid": "81646974-6171-5063-9c00-0b83235df842", 00:15:37.177 "is_configured": true, 00:15:37.177 "data_offset": 2048, 00:15:37.177 "data_size": 63488 00:15:37.177 }, 00:15:37.177 { 00:15:37.177 "name": "BaseBdev3", 00:15:37.177 "uuid": "d14c3dd9-9d46-5cac-aca6-9afc569dc463", 00:15:37.177 "is_configured": true, 00:15:37.177 "data_offset": 2048, 00:15:37.177 "data_size": 63488 00:15:37.177 } 00:15:37.177 ] 00:15:37.177 }' 00:15:37.177 09:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.177 09:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.743 09:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:37.743 09:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:37.743 [2024-11-06 09:08:14.262676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:15:38.677 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:38.677 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.677 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.677 [2024-11-06 09:08:15.143474] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:15:38.677 [2024-11-06 09:08:15.143537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.677 [2024-11-06 09:08:15.143796] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:15:38.677 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.677 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:38.677 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.678 "name": "raid_bdev1", 00:15:38.678 "uuid": "0ab4990c-3666-4f26-8cb3-b479fb72f8ca", 00:15:38.678 "strip_size_kb": 0, 00:15:38.678 "state": "online", 00:15:38.678 "raid_level": "raid1", 00:15:38.678 "superblock": true, 00:15:38.678 "num_base_bdevs": 3, 00:15:38.678 "num_base_bdevs_discovered": 2, 00:15:38.678 "num_base_bdevs_operational": 2, 00:15:38.678 "base_bdevs_list": [ 00:15:38.678 { 00:15:38.678 "name": null, 00:15:38.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.678 "is_configured": false, 00:15:38.678 "data_offset": 0, 00:15:38.678 "data_size": 63488 00:15:38.678 }, 00:15:38.678 { 00:15:38.678 "name": "BaseBdev2", 00:15:38.678 "uuid": "81646974-6171-5063-9c00-0b83235df842", 00:15:38.678 "is_configured": true, 00:15:38.678 "data_offset": 2048, 00:15:38.678 "data_size": 63488 00:15:38.678 }, 00:15:38.678 { 00:15:38.678 "name": "BaseBdev3", 00:15:38.678 "uuid": "d14c3dd9-9d46-5cac-aca6-9afc569dc463", 00:15:38.678 "is_configured": true, 00:15:38.678 "data_offset": 2048, 00:15:38.678 "data_size": 63488 00:15:38.678 } 00:15:38.678 ] 00:15:38.678 }' 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.678 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.244 [2024-11-06 09:08:15.669422] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.244 [2024-11-06 09:08:15.669461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.244 [2024-11-06 09:08:15.672727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.244 [2024-11-06 09:08:15.672800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.244 [2024-11-06 09:08:15.672931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.244 [2024-11-06 09:08:15.672958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:39.244 { 00:15:39.244 "results": [ 00:15:39.244 { 00:15:39.244 "job": "raid_bdev1", 00:15:39.244 "core_mask": "0x1", 00:15:39.244 "workload": "randrw", 00:15:39.244 "percentage": 50, 00:15:39.244 "status": "finished", 00:15:39.244 "queue_depth": 1, 00:15:39.244 "io_size": 131072, 00:15:39.244 "runtime": 1.40422, 00:15:39.244 "iops": 10147.270370739629, 00:15:39.244 "mibps": 1268.4087963424536, 00:15:39.244 "io_failed": 0, 00:15:39.244 "io_timeout": 0, 00:15:39.244 "avg_latency_us": 94.3318326644932, 00:15:39.244 "min_latency_us": 43.28727272727273, 00:15:39.244 "max_latency_us": 1876.7127272727273 00:15:39.244 } 00:15:39.244 ], 00:15:39.244 "core_count": 1 00:15:39.244 } 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69246 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 69246 ']' 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 69246 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:15:39.244 09:08:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69246 00:15:39.244 killing process with pid 69246 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69246' 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 69246 00:15:39.244 [2024-11-06 09:08:15.706521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:39.244 09:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 69246 00:15:39.502 [2024-11-06 09:08:15.914932] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:40.876 09:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:40.876 09:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.A3JxcQSf2y 00:15:40.876 09:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:40.876 09:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:40.876 09:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:40.876 09:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.876 09:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:40.876 09:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:40.876 00:15:40.876 real 0m4.685s 00:15:40.876 user 0m5.839s 00:15:40.876 sys 0m0.551s 00:15:40.876 09:08:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.876 09:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.876 ************************************ 00:15:40.876 END TEST raid_write_error_test 00:15:40.876 ************************************ 00:15:40.876 09:08:17 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:15:40.876 09:08:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:40.876 09:08:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:15:40.876 09:08:17 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:40.876 09:08:17 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.876 09:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:40.876 ************************************ 00:15:40.876 START TEST raid_state_function_test 00:15:40.877 ************************************ 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:40.877 
09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69390 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69390' 00:15:40.877 Process raid pid: 69390 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69390 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 69390 ']' 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:40.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:40.877 09:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.877 [2024-11-06 09:08:17.148780] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:15:40.877 [2024-11-06 09:08:17.148957] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:40.877 [2024-11-06 09:08:17.330889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.135 [2024-11-06 09:08:17.492840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.393 [2024-11-06 09:08:17.715330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.393 [2024-11-06 09:08:17.715378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.651 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:41.651 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:41.651 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:41.651 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.651 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.651 [2024-11-06 09:08:18.182256] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.651 [2024-11-06 09:08:18.182321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.651 [2024-11-06 09:08:18.182339] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.651 [2024-11-06 09:08:18.182356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.651 [2024-11-06 09:08:18.182367] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:41.651 [2024-11-06 09:08:18.182381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:41.651 [2024-11-06 09:08:18.182391] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:41.651 [2024-11-06 09:08:18.182406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.910 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.910 "name": "Existed_Raid", 00:15:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.910 "strip_size_kb": 64, 00:15:41.910 "state": "configuring", 00:15:41.910 "raid_level": "raid0", 00:15:41.910 "superblock": false, 00:15:41.910 "num_base_bdevs": 4, 00:15:41.910 "num_base_bdevs_discovered": 0, 00:15:41.910 "num_base_bdevs_operational": 4, 00:15:41.910 "base_bdevs_list": [ 00:15:41.910 { 00:15:41.910 "name": "BaseBdev1", 00:15:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.910 "is_configured": false, 00:15:41.910 "data_offset": 0, 00:15:41.910 "data_size": 0 00:15:41.910 }, 00:15:41.910 { 00:15:41.910 "name": "BaseBdev2", 00:15:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.910 "is_configured": false, 00:15:41.910 "data_offset": 0, 00:15:41.910 "data_size": 0 00:15:41.910 }, 00:15:41.910 { 00:15:41.910 "name": "BaseBdev3", 00:15:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.910 "is_configured": false, 00:15:41.910 "data_offset": 0, 00:15:41.910 "data_size": 0 00:15:41.910 }, 00:15:41.910 { 00:15:41.910 "name": "BaseBdev4", 00:15:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.910 "is_configured": false, 00:15:41.910 "data_offset": 0, 00:15:41.911 "data_size": 0 00:15:41.911 } 00:15:41.911 ] 00:15:41.911 }' 00:15:41.911 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.911 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.169 [2024-11-06 09:08:18.686312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.169 [2024-11-06 09:08:18.686361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.169 [2024-11-06 09:08:18.694310] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.169 [2024-11-06 09:08:18.694365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.169 [2024-11-06 09:08:18.694380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.169 [2024-11-06 09:08:18.694397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.169 [2024-11-06 09:08:18.694407] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.169 [2024-11-06 09:08:18.694421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.169 [2024-11-06 09:08:18.694431] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.169 [2024-11-06 09:08:18.694445] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.169 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.428 [2024-11-06 09:08:18.738716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.428 BaseBdev1 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.428 [ 00:15:42.428 { 00:15:42.428 "name": "BaseBdev1", 00:15:42.428 "aliases": [ 00:15:42.428 "c5cb3df7-8618-473e-9b8e-303546752607" 00:15:42.428 ], 00:15:42.428 "product_name": "Malloc disk", 00:15:42.428 "block_size": 512, 00:15:42.428 "num_blocks": 65536, 00:15:42.428 "uuid": "c5cb3df7-8618-473e-9b8e-303546752607", 00:15:42.428 "assigned_rate_limits": { 00:15:42.428 "rw_ios_per_sec": 0, 00:15:42.428 "rw_mbytes_per_sec": 0, 00:15:42.428 "r_mbytes_per_sec": 0, 00:15:42.428 "w_mbytes_per_sec": 0 00:15:42.428 }, 00:15:42.428 "claimed": true, 00:15:42.428 "claim_type": "exclusive_write", 00:15:42.428 "zoned": false, 00:15:42.428 "supported_io_types": { 00:15:42.428 "read": true, 00:15:42.428 "write": true, 00:15:42.428 "unmap": true, 00:15:42.428 "flush": true, 00:15:42.428 "reset": true, 00:15:42.428 "nvme_admin": false, 00:15:42.428 "nvme_io": false, 00:15:42.428 "nvme_io_md": false, 00:15:42.428 "write_zeroes": true, 00:15:42.428 "zcopy": true, 00:15:42.428 "get_zone_info": false, 00:15:42.428 "zone_management": false, 00:15:42.428 "zone_append": false, 00:15:42.428 "compare": false, 00:15:42.428 "compare_and_write": false, 00:15:42.428 "abort": true, 00:15:42.428 "seek_hole": false, 00:15:42.428 "seek_data": false, 00:15:42.428 "copy": true, 00:15:42.428 "nvme_iov_md": false 00:15:42.428 }, 00:15:42.428 "memory_domains": [ 00:15:42.428 { 00:15:42.428 "dma_device_id": "system", 00:15:42.428 "dma_device_type": 1 00:15:42.428 }, 00:15:42.428 { 00:15:42.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.428 "dma_device_type": 2 00:15:42.428 } 00:15:42.428 ], 00:15:42.428 "driver_specific": {} 00:15:42.428 } 00:15:42.428 ] 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.428 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.428 "name": "Existed_Raid", 
00:15:42.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.428 "strip_size_kb": 64, 00:15:42.428 "state": "configuring", 00:15:42.428 "raid_level": "raid0", 00:15:42.428 "superblock": false, 00:15:42.429 "num_base_bdevs": 4, 00:15:42.429 "num_base_bdevs_discovered": 1, 00:15:42.429 "num_base_bdevs_operational": 4, 00:15:42.429 "base_bdevs_list": [ 00:15:42.429 { 00:15:42.429 "name": "BaseBdev1", 00:15:42.429 "uuid": "c5cb3df7-8618-473e-9b8e-303546752607", 00:15:42.429 "is_configured": true, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 65536 00:15:42.429 }, 00:15:42.429 { 00:15:42.429 "name": "BaseBdev2", 00:15:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.429 "is_configured": false, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 0 00:15:42.429 }, 00:15:42.429 { 00:15:42.429 "name": "BaseBdev3", 00:15:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.429 "is_configured": false, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 0 00:15:42.429 }, 00:15:42.429 { 00:15:42.429 "name": "BaseBdev4", 00:15:42.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.429 "is_configured": false, 00:15:42.429 "data_offset": 0, 00:15:42.429 "data_size": 0 00:15:42.429 } 00:15:42.429 ] 00:15:42.429 }' 00:15:42.429 09:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.429 09:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.995 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.995 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.995 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.996 [2024-11-06 09:08:19.290930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.996 [2024-11-06 09:08:19.290993] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.996 [2024-11-06 09:08:19.298984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.996 [2024-11-06 09:08:19.301508] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.996 [2024-11-06 09:08:19.301675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.996 [2024-11-06 09:08:19.301792] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:42.996 [2024-11-06 09:08:19.301948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:42.996 [2024-11-06 09:08:19.302083] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:42.996 [2024-11-06 09:08:19.302143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.996 "name": "Existed_Raid", 00:15:42.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.996 "strip_size_kb": 64, 00:15:42.996 "state": "configuring", 00:15:42.996 "raid_level": "raid0", 00:15:42.996 "superblock": false, 00:15:42.996 "num_base_bdevs": 4, 00:15:42.996 
"num_base_bdevs_discovered": 1, 00:15:42.996 "num_base_bdevs_operational": 4, 00:15:42.996 "base_bdevs_list": [ 00:15:42.996 { 00:15:42.996 "name": "BaseBdev1", 00:15:42.996 "uuid": "c5cb3df7-8618-473e-9b8e-303546752607", 00:15:42.996 "is_configured": true, 00:15:42.996 "data_offset": 0, 00:15:42.996 "data_size": 65536 00:15:42.996 }, 00:15:42.996 { 00:15:42.996 "name": "BaseBdev2", 00:15:42.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.996 "is_configured": false, 00:15:42.996 "data_offset": 0, 00:15:42.996 "data_size": 0 00:15:42.996 }, 00:15:42.996 { 00:15:42.996 "name": "BaseBdev3", 00:15:42.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.996 "is_configured": false, 00:15:42.996 "data_offset": 0, 00:15:42.996 "data_size": 0 00:15:42.996 }, 00:15:42.996 { 00:15:42.996 "name": "BaseBdev4", 00:15:42.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.996 "is_configured": false, 00:15:42.996 "data_offset": 0, 00:15:42.996 "data_size": 0 00:15:42.996 } 00:15:42.996 ] 00:15:42.996 }' 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.996 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.563 [2024-11-06 09:08:19.861459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.563 BaseBdev2 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:43.563 09:08:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.563 [ 00:15:43.563 { 00:15:43.563 "name": "BaseBdev2", 00:15:43.563 "aliases": [ 00:15:43.563 "b461824f-0d55-40de-9cc0-06852149b1e5" 00:15:43.563 ], 00:15:43.563 "product_name": "Malloc disk", 00:15:43.563 "block_size": 512, 00:15:43.563 "num_blocks": 65536, 00:15:43.563 "uuid": "b461824f-0d55-40de-9cc0-06852149b1e5", 00:15:43.563 "assigned_rate_limits": { 00:15:43.563 "rw_ios_per_sec": 0, 00:15:43.563 "rw_mbytes_per_sec": 0, 00:15:43.563 "r_mbytes_per_sec": 0, 00:15:43.563 "w_mbytes_per_sec": 0 00:15:43.563 }, 00:15:43.563 "claimed": true, 00:15:43.563 "claim_type": "exclusive_write", 00:15:43.563 "zoned": false, 00:15:43.563 "supported_io_types": { 
00:15:43.563 "read": true, 00:15:43.563 "write": true, 00:15:43.563 "unmap": true, 00:15:43.563 "flush": true, 00:15:43.563 "reset": true, 00:15:43.563 "nvme_admin": false, 00:15:43.563 "nvme_io": false, 00:15:43.563 "nvme_io_md": false, 00:15:43.563 "write_zeroes": true, 00:15:43.563 "zcopy": true, 00:15:43.563 "get_zone_info": false, 00:15:43.563 "zone_management": false, 00:15:43.563 "zone_append": false, 00:15:43.563 "compare": false, 00:15:43.563 "compare_and_write": false, 00:15:43.563 "abort": true, 00:15:43.563 "seek_hole": false, 00:15:43.563 "seek_data": false, 00:15:43.563 "copy": true, 00:15:43.563 "nvme_iov_md": false 00:15:43.563 }, 00:15:43.563 "memory_domains": [ 00:15:43.563 { 00:15:43.563 "dma_device_id": "system", 00:15:43.563 "dma_device_type": 1 00:15:43.563 }, 00:15:43.563 { 00:15:43.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.563 "dma_device_type": 2 00:15:43.563 } 00:15:43.563 ], 00:15:43.563 "driver_specific": {} 00:15:43.563 } 00:15:43.563 ] 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.563 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.564 "name": "Existed_Raid", 00:15:43.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.564 "strip_size_kb": 64, 00:15:43.564 "state": "configuring", 00:15:43.564 "raid_level": "raid0", 00:15:43.564 "superblock": false, 00:15:43.564 "num_base_bdevs": 4, 00:15:43.564 "num_base_bdevs_discovered": 2, 00:15:43.564 "num_base_bdevs_operational": 4, 00:15:43.564 "base_bdevs_list": [ 00:15:43.564 { 00:15:43.564 "name": "BaseBdev1", 00:15:43.564 "uuid": "c5cb3df7-8618-473e-9b8e-303546752607", 00:15:43.564 "is_configured": true, 00:15:43.564 "data_offset": 0, 00:15:43.564 "data_size": 65536 00:15:43.564 }, 00:15:43.564 { 00:15:43.564 "name": "BaseBdev2", 00:15:43.564 "uuid": "b461824f-0d55-40de-9cc0-06852149b1e5", 00:15:43.564 
"is_configured": true, 00:15:43.564 "data_offset": 0, 00:15:43.564 "data_size": 65536 00:15:43.564 }, 00:15:43.564 { 00:15:43.564 "name": "BaseBdev3", 00:15:43.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.564 "is_configured": false, 00:15:43.564 "data_offset": 0, 00:15:43.564 "data_size": 0 00:15:43.564 }, 00:15:43.564 { 00:15:43.564 "name": "BaseBdev4", 00:15:43.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.564 "is_configured": false, 00:15:43.564 "data_offset": 0, 00:15:43.564 "data_size": 0 00:15:43.564 } 00:15:43.564 ] 00:15:43.564 }' 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.564 09:08:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.129 [2024-11-06 09:08:20.480223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.129 BaseBdev3 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.129 [ 00:15:44.129 { 00:15:44.129 "name": "BaseBdev3", 00:15:44.129 "aliases": [ 00:15:44.129 "8597ce2d-04c6-46bf-bb24-a242dfed7edc" 00:15:44.129 ], 00:15:44.129 "product_name": "Malloc disk", 00:15:44.129 "block_size": 512, 00:15:44.129 "num_blocks": 65536, 00:15:44.129 "uuid": "8597ce2d-04c6-46bf-bb24-a242dfed7edc", 00:15:44.129 "assigned_rate_limits": { 00:15:44.129 "rw_ios_per_sec": 0, 00:15:44.129 "rw_mbytes_per_sec": 0, 00:15:44.129 "r_mbytes_per_sec": 0, 00:15:44.129 "w_mbytes_per_sec": 0 00:15:44.129 }, 00:15:44.129 "claimed": true, 00:15:44.129 "claim_type": "exclusive_write", 00:15:44.129 "zoned": false, 00:15:44.129 "supported_io_types": { 00:15:44.129 "read": true, 00:15:44.129 "write": true, 00:15:44.129 "unmap": true, 00:15:44.129 "flush": true, 00:15:44.129 "reset": true, 00:15:44.129 "nvme_admin": false, 00:15:44.129 "nvme_io": false, 00:15:44.129 "nvme_io_md": false, 00:15:44.129 "write_zeroes": true, 00:15:44.129 "zcopy": true, 00:15:44.129 "get_zone_info": false, 00:15:44.129 "zone_management": false, 00:15:44.129 "zone_append": false, 00:15:44.129 "compare": false, 00:15:44.129 "compare_and_write": false, 
00:15:44.129 "abort": true, 00:15:44.129 "seek_hole": false, 00:15:44.129 "seek_data": false, 00:15:44.129 "copy": true, 00:15:44.129 "nvme_iov_md": false 00:15:44.129 }, 00:15:44.129 "memory_domains": [ 00:15:44.129 { 00:15:44.129 "dma_device_id": "system", 00:15:44.129 "dma_device_type": 1 00:15:44.129 }, 00:15:44.129 { 00:15:44.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.129 "dma_device_type": 2 00:15:44.129 } 00:15:44.129 ], 00:15:44.129 "driver_specific": {} 00:15:44.129 } 00:15:44.129 ] 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.129 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.130 "name": "Existed_Raid", 00:15:44.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.130 "strip_size_kb": 64, 00:15:44.130 "state": "configuring", 00:15:44.130 "raid_level": "raid0", 00:15:44.130 "superblock": false, 00:15:44.130 "num_base_bdevs": 4, 00:15:44.130 "num_base_bdevs_discovered": 3, 00:15:44.130 "num_base_bdevs_operational": 4, 00:15:44.130 "base_bdevs_list": [ 00:15:44.130 { 00:15:44.130 "name": "BaseBdev1", 00:15:44.130 "uuid": "c5cb3df7-8618-473e-9b8e-303546752607", 00:15:44.130 "is_configured": true, 00:15:44.130 "data_offset": 0, 00:15:44.130 "data_size": 65536 00:15:44.130 }, 00:15:44.130 { 00:15:44.130 "name": "BaseBdev2", 00:15:44.130 "uuid": "b461824f-0d55-40de-9cc0-06852149b1e5", 00:15:44.130 "is_configured": true, 00:15:44.130 "data_offset": 0, 00:15:44.130 "data_size": 65536 00:15:44.130 }, 00:15:44.130 { 00:15:44.130 "name": "BaseBdev3", 00:15:44.130 "uuid": "8597ce2d-04c6-46bf-bb24-a242dfed7edc", 00:15:44.130 "is_configured": true, 00:15:44.130 "data_offset": 0, 00:15:44.130 "data_size": 65536 00:15:44.130 }, 00:15:44.130 { 00:15:44.130 "name": "BaseBdev4", 00:15:44.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.130 "is_configured": false, 
00:15:44.130 "data_offset": 0, 00:15:44.130 "data_size": 0 00:15:44.130 } 00:15:44.130 ] 00:15:44.130 }' 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.130 09:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.696 [2024-11-06 09:08:21.070631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:44.696 [2024-11-06 09:08:21.070691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:44.696 [2024-11-06 09:08:21.070705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:44.696 [2024-11-06 09:08:21.071100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:44.696 [2024-11-06 09:08:21.071313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:44.696 [2024-11-06 09:08:21.071354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:44.696 [2024-11-06 09:08:21.071660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.696 BaseBdev4 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.696 [ 00:15:44.696 { 00:15:44.696 "name": "BaseBdev4", 00:15:44.696 "aliases": [ 00:15:44.696 "046b5775-2b01-45ee-a9b6-6053b7125f6e" 00:15:44.696 ], 00:15:44.696 "product_name": "Malloc disk", 00:15:44.696 "block_size": 512, 00:15:44.696 "num_blocks": 65536, 00:15:44.696 "uuid": "046b5775-2b01-45ee-a9b6-6053b7125f6e", 00:15:44.696 "assigned_rate_limits": { 00:15:44.696 "rw_ios_per_sec": 0, 00:15:44.696 "rw_mbytes_per_sec": 0, 00:15:44.696 "r_mbytes_per_sec": 0, 00:15:44.696 "w_mbytes_per_sec": 0 00:15:44.696 }, 00:15:44.696 "claimed": true, 00:15:44.696 "claim_type": "exclusive_write", 00:15:44.696 "zoned": false, 00:15:44.696 "supported_io_types": { 00:15:44.696 "read": true, 00:15:44.696 "write": true, 00:15:44.696 "unmap": true, 00:15:44.696 "flush": true, 00:15:44.696 "reset": true, 00:15:44.696 
"nvme_admin": false, 00:15:44.696 "nvme_io": false, 00:15:44.696 "nvme_io_md": false, 00:15:44.696 "write_zeroes": true, 00:15:44.696 "zcopy": true, 00:15:44.696 "get_zone_info": false, 00:15:44.696 "zone_management": false, 00:15:44.696 "zone_append": false, 00:15:44.696 "compare": false, 00:15:44.696 "compare_and_write": false, 00:15:44.696 "abort": true, 00:15:44.696 "seek_hole": false, 00:15:44.696 "seek_data": false, 00:15:44.696 "copy": true, 00:15:44.696 "nvme_iov_md": false 00:15:44.696 }, 00:15:44.696 "memory_domains": [ 00:15:44.696 { 00:15:44.696 "dma_device_id": "system", 00:15:44.696 "dma_device_type": 1 00:15:44.696 }, 00:15:44.696 { 00:15:44.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.696 "dma_device_type": 2 00:15:44.696 } 00:15:44.696 ], 00:15:44.696 "driver_specific": {} 00:15:44.696 } 00:15:44.696 ] 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.696 09:08:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.696 "name": "Existed_Raid", 00:15:44.696 "uuid": "ecd90f81-709b-4ca4-b879-233fcbc47ad7", 00:15:44.696 "strip_size_kb": 64, 00:15:44.696 "state": "online", 00:15:44.696 "raid_level": "raid0", 00:15:44.696 "superblock": false, 00:15:44.696 "num_base_bdevs": 4, 00:15:44.696 "num_base_bdevs_discovered": 4, 00:15:44.696 "num_base_bdevs_operational": 4, 00:15:44.696 "base_bdevs_list": [ 00:15:44.696 { 00:15:44.696 "name": "BaseBdev1", 00:15:44.696 "uuid": "c5cb3df7-8618-473e-9b8e-303546752607", 00:15:44.696 "is_configured": true, 00:15:44.696 "data_offset": 0, 00:15:44.696 "data_size": 65536 00:15:44.696 }, 00:15:44.696 { 00:15:44.696 "name": "BaseBdev2", 00:15:44.696 "uuid": "b461824f-0d55-40de-9cc0-06852149b1e5", 00:15:44.696 "is_configured": true, 00:15:44.696 "data_offset": 0, 00:15:44.696 "data_size": 65536 00:15:44.696 }, 00:15:44.696 { 00:15:44.696 "name": "BaseBdev3", 00:15:44.696 "uuid": 
"8597ce2d-04c6-46bf-bb24-a242dfed7edc", 00:15:44.696 "is_configured": true, 00:15:44.696 "data_offset": 0, 00:15:44.696 "data_size": 65536 00:15:44.696 }, 00:15:44.696 { 00:15:44.696 "name": "BaseBdev4", 00:15:44.696 "uuid": "046b5775-2b01-45ee-a9b6-6053b7125f6e", 00:15:44.696 "is_configured": true, 00:15:44.696 "data_offset": 0, 00:15:44.696 "data_size": 65536 00:15:44.696 } 00:15:44.696 ] 00:15:44.696 }' 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.696 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.263 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.263 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.263 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.263 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.263 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.263 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.264 [2024-11-06 09:08:21.623308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.264 09:08:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.264 "name": "Existed_Raid", 00:15:45.264 "aliases": [ 00:15:45.264 "ecd90f81-709b-4ca4-b879-233fcbc47ad7" 00:15:45.264 ], 00:15:45.264 "product_name": "Raid Volume", 00:15:45.264 "block_size": 512, 00:15:45.264 "num_blocks": 262144, 00:15:45.264 "uuid": "ecd90f81-709b-4ca4-b879-233fcbc47ad7", 00:15:45.264 "assigned_rate_limits": { 00:15:45.264 "rw_ios_per_sec": 0, 00:15:45.264 "rw_mbytes_per_sec": 0, 00:15:45.264 "r_mbytes_per_sec": 0, 00:15:45.264 "w_mbytes_per_sec": 0 00:15:45.264 }, 00:15:45.264 "claimed": false, 00:15:45.264 "zoned": false, 00:15:45.264 "supported_io_types": { 00:15:45.264 "read": true, 00:15:45.264 "write": true, 00:15:45.264 "unmap": true, 00:15:45.264 "flush": true, 00:15:45.264 "reset": true, 00:15:45.264 "nvme_admin": false, 00:15:45.264 "nvme_io": false, 00:15:45.264 "nvme_io_md": false, 00:15:45.264 "write_zeroes": true, 00:15:45.264 "zcopy": false, 00:15:45.264 "get_zone_info": false, 00:15:45.264 "zone_management": false, 00:15:45.264 "zone_append": false, 00:15:45.264 "compare": false, 00:15:45.264 "compare_and_write": false, 00:15:45.264 "abort": false, 00:15:45.264 "seek_hole": false, 00:15:45.264 "seek_data": false, 00:15:45.264 "copy": false, 00:15:45.264 "nvme_iov_md": false 00:15:45.264 }, 00:15:45.264 "memory_domains": [ 00:15:45.264 { 00:15:45.264 "dma_device_id": "system", 00:15:45.264 "dma_device_type": 1 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.264 "dma_device_type": 2 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "dma_device_id": "system", 00:15:45.264 "dma_device_type": 1 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.264 "dma_device_type": 2 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "dma_device_id": "system", 00:15:45.264 "dma_device_type": 1 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:45.264 "dma_device_type": 2 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "dma_device_id": "system", 00:15:45.264 "dma_device_type": 1 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.264 "dma_device_type": 2 00:15:45.264 } 00:15:45.264 ], 00:15:45.264 "driver_specific": { 00:15:45.264 "raid": { 00:15:45.264 "uuid": "ecd90f81-709b-4ca4-b879-233fcbc47ad7", 00:15:45.264 "strip_size_kb": 64, 00:15:45.264 "state": "online", 00:15:45.264 "raid_level": "raid0", 00:15:45.264 "superblock": false, 00:15:45.264 "num_base_bdevs": 4, 00:15:45.264 "num_base_bdevs_discovered": 4, 00:15:45.264 "num_base_bdevs_operational": 4, 00:15:45.264 "base_bdevs_list": [ 00:15:45.264 { 00:15:45.264 "name": "BaseBdev1", 00:15:45.264 "uuid": "c5cb3df7-8618-473e-9b8e-303546752607", 00:15:45.264 "is_configured": true, 00:15:45.264 "data_offset": 0, 00:15:45.264 "data_size": 65536 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "name": "BaseBdev2", 00:15:45.264 "uuid": "b461824f-0d55-40de-9cc0-06852149b1e5", 00:15:45.264 "is_configured": true, 00:15:45.264 "data_offset": 0, 00:15:45.264 "data_size": 65536 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "name": "BaseBdev3", 00:15:45.264 "uuid": "8597ce2d-04c6-46bf-bb24-a242dfed7edc", 00:15:45.264 "is_configured": true, 00:15:45.264 "data_offset": 0, 00:15:45.264 "data_size": 65536 00:15:45.264 }, 00:15:45.264 { 00:15:45.264 "name": "BaseBdev4", 00:15:45.264 "uuid": "046b5775-2b01-45ee-a9b6-6053b7125f6e", 00:15:45.264 "is_configured": true, 00:15:45.264 "data_offset": 0, 00:15:45.264 "data_size": 65536 00:15:45.264 } 00:15:45.264 ] 00:15:45.264 } 00:15:45.264 } 00:15:45.264 }' 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:45.264 BaseBdev2 00:15:45.264 BaseBdev3 
00:15:45.264 BaseBdev4' 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.264 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.523 09:08:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.523 09:08:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.523 09:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.523 [2024-11-06 09:08:21.975040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.523 [2024-11-06 09:08:21.975232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.523 [2024-11-06 09:08:21.975318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.782 "name": "Existed_Raid", 00:15:45.782 "uuid": "ecd90f81-709b-4ca4-b879-233fcbc47ad7", 00:15:45.782 "strip_size_kb": 64, 00:15:45.782 "state": "offline", 00:15:45.782 "raid_level": "raid0", 00:15:45.782 "superblock": false, 00:15:45.782 "num_base_bdevs": 4, 00:15:45.782 "num_base_bdevs_discovered": 3, 00:15:45.782 "num_base_bdevs_operational": 3, 00:15:45.782 "base_bdevs_list": [ 00:15:45.782 { 00:15:45.782 "name": null, 00:15:45.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.782 "is_configured": false, 00:15:45.782 "data_offset": 0, 00:15:45.782 "data_size": 65536 00:15:45.782 }, 00:15:45.782 { 00:15:45.782 "name": "BaseBdev2", 00:15:45.782 "uuid": "b461824f-0d55-40de-9cc0-06852149b1e5", 00:15:45.782 "is_configured": 
true, 00:15:45.782 "data_offset": 0, 00:15:45.782 "data_size": 65536 00:15:45.782 }, 00:15:45.782 { 00:15:45.782 "name": "BaseBdev3", 00:15:45.782 "uuid": "8597ce2d-04c6-46bf-bb24-a242dfed7edc", 00:15:45.782 "is_configured": true, 00:15:45.782 "data_offset": 0, 00:15:45.782 "data_size": 65536 00:15:45.782 }, 00:15:45.782 { 00:15:45.782 "name": "BaseBdev4", 00:15:45.782 "uuid": "046b5775-2b01-45ee-a9b6-6053b7125f6e", 00:15:45.782 "is_configured": true, 00:15:45.782 "data_offset": 0, 00:15:45.782 "data_size": 65536 00:15:45.782 } 00:15:45.782 ] 00:15:45.782 }' 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.782 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.351 [2024-11-06 09:08:22.653367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.351 [2024-11-06 09:08:22.791662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.351 09:08:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.351 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:46.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:46.610 09:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:46.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 09:08:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 [2024-11-06 09:08:22.923224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:46.610 [2024-11-06 09:08:22.923282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 BaseBdev2 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.610 [ 00:15:46.610 { 00:15:46.610 "name": "BaseBdev2", 00:15:46.610 "aliases": [ 00:15:46.610 "c77a1136-366e-4495-a778-367fcd9b670f" 00:15:46.610 ], 00:15:46.610 "product_name": "Malloc disk", 00:15:46.610 "block_size": 512, 00:15:46.610 "num_blocks": 65536, 00:15:46.610 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:46.610 "assigned_rate_limits": { 00:15:46.610 "rw_ios_per_sec": 0, 00:15:46.610 "rw_mbytes_per_sec": 0, 00:15:46.610 "r_mbytes_per_sec": 0, 00:15:46.610 "w_mbytes_per_sec": 0 00:15:46.610 }, 00:15:46.610 "claimed": false, 00:15:46.610 "zoned": false, 00:15:46.610 "supported_io_types": { 00:15:46.610 "read": true, 00:15:46.610 "write": true, 00:15:46.610 "unmap": true, 00:15:46.610 "flush": true, 00:15:46.610 "reset": true, 00:15:46.610 "nvme_admin": false, 00:15:46.610 "nvme_io": false, 00:15:46.610 "nvme_io_md": false, 00:15:46.610 "write_zeroes": true, 00:15:46.610 "zcopy": true, 00:15:46.610 "get_zone_info": false, 00:15:46.610 "zone_management": false, 00:15:46.610 "zone_append": false, 00:15:46.610 "compare": false, 00:15:46.610 "compare_and_write": false, 00:15:46.610 "abort": true, 00:15:46.610 "seek_hole": false, 00:15:46.610 "seek_data": false, 
00:15:46.610 "copy": true, 00:15:46.610 "nvme_iov_md": false 00:15:46.610 }, 00:15:46.610 "memory_domains": [ 00:15:46.610 { 00:15:46.610 "dma_device_id": "system", 00:15:46.610 "dma_device_type": 1 00:15:46.610 }, 00:15:46.610 { 00:15:46.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.610 "dma_device_type": 2 00:15:46.610 } 00:15:46.610 ], 00:15:46.610 "driver_specific": {} 00:15:46.610 } 00:15:46.610 ] 00:15:46.610 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.611 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:46.611 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.611 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.611 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:46.611 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.611 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.869 BaseBdev3 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:46.870 
09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.870 [ 00:15:46.870 { 00:15:46.870 "name": "BaseBdev3", 00:15:46.870 "aliases": [ 00:15:46.870 "55114b23-0b1e-4330-8d07-92474cab080f" 00:15:46.870 ], 00:15:46.870 "product_name": "Malloc disk", 00:15:46.870 "block_size": 512, 00:15:46.870 "num_blocks": 65536, 00:15:46.870 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:46.870 "assigned_rate_limits": { 00:15:46.870 "rw_ios_per_sec": 0, 00:15:46.870 "rw_mbytes_per_sec": 0, 00:15:46.870 "r_mbytes_per_sec": 0, 00:15:46.870 "w_mbytes_per_sec": 0 00:15:46.870 }, 00:15:46.870 "claimed": false, 00:15:46.870 "zoned": false, 00:15:46.870 "supported_io_types": { 00:15:46.870 "read": true, 00:15:46.870 "write": true, 00:15:46.870 "unmap": true, 00:15:46.870 "flush": true, 00:15:46.870 "reset": true, 00:15:46.870 "nvme_admin": false, 00:15:46.870 "nvme_io": false, 00:15:46.870 "nvme_io_md": false, 00:15:46.870 "write_zeroes": true, 00:15:46.870 "zcopy": true, 00:15:46.870 "get_zone_info": false, 00:15:46.870 "zone_management": false, 00:15:46.870 "zone_append": false, 00:15:46.870 "compare": false, 00:15:46.870 "compare_and_write": false, 00:15:46.870 "abort": true, 00:15:46.870 "seek_hole": false, 00:15:46.870 "seek_data": false, 00:15:46.870 
"copy": true, 00:15:46.870 "nvme_iov_md": false 00:15:46.870 }, 00:15:46.870 "memory_domains": [ 00:15:46.870 { 00:15:46.870 "dma_device_id": "system", 00:15:46.870 "dma_device_type": 1 00:15:46.870 }, 00:15:46.870 { 00:15:46.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.870 "dma_device_type": 2 00:15:46.870 } 00:15:46.870 ], 00:15:46.870 "driver_specific": {} 00:15:46.870 } 00:15:46.870 ] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.870 BaseBdev4 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:46.870 09:08:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.870 [ 00:15:46.870 { 00:15:46.870 "name": "BaseBdev4", 00:15:46.870 "aliases": [ 00:15:46.870 "6961865f-a76f-4e80-a435-3bff7dbd4530" 00:15:46.870 ], 00:15:46.870 "product_name": "Malloc disk", 00:15:46.870 "block_size": 512, 00:15:46.870 "num_blocks": 65536, 00:15:46.870 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:46.870 "assigned_rate_limits": { 00:15:46.870 "rw_ios_per_sec": 0, 00:15:46.870 "rw_mbytes_per_sec": 0, 00:15:46.870 "r_mbytes_per_sec": 0, 00:15:46.870 "w_mbytes_per_sec": 0 00:15:46.870 }, 00:15:46.870 "claimed": false, 00:15:46.870 "zoned": false, 00:15:46.870 "supported_io_types": { 00:15:46.870 "read": true, 00:15:46.870 "write": true, 00:15:46.870 "unmap": true, 00:15:46.870 "flush": true, 00:15:46.870 "reset": true, 00:15:46.870 "nvme_admin": false, 00:15:46.870 "nvme_io": false, 00:15:46.870 "nvme_io_md": false, 00:15:46.870 "write_zeroes": true, 00:15:46.870 "zcopy": true, 00:15:46.870 "get_zone_info": false, 00:15:46.870 "zone_management": false, 00:15:46.870 "zone_append": false, 00:15:46.870 "compare": false, 00:15:46.870 "compare_and_write": false, 00:15:46.870 "abort": true, 00:15:46.870 "seek_hole": false, 00:15:46.870 "seek_data": false, 00:15:46.870 "copy": true, 
00:15:46.870 "nvme_iov_md": false 00:15:46.870 }, 00:15:46.870 "memory_domains": [ 00:15:46.870 { 00:15:46.870 "dma_device_id": "system", 00:15:46.870 "dma_device_type": 1 00:15:46.870 }, 00:15:46.870 { 00:15:46.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.870 "dma_device_type": 2 00:15:46.870 } 00:15:46.870 ], 00:15:46.870 "driver_specific": {} 00:15:46.870 } 00:15:46.870 ] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.870 [2024-11-06 09:08:23.288587] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.870 [2024-11-06 09:08:23.288767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.870 [2024-11-06 09:08:23.288831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.870 [2024-11-06 09:08:23.291215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.870 [2024-11-06 09:08:23.291287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 
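The `waitforbdev` expansions at `@901`..`@909` above repeat the same pattern each time a malloc bdev is created: default the timeout, flush the examine stage, then query the new bdev with `-t`. A condensed re-sketch of that helper (the retry counter declared via `local i` in the real helper is elided here; `rpc_cmd` is assumed to be SPDK's usual wrapper around `scripts/rpc.py`):

```shell
# Condensed sketch of the waitforbdev helper expanded in the trace
# (common/autotest_common.sh@901..@909). The real helper also declares a
# retry counter ("local i"); that loop is dropped in this sketch.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=$2
    [[ -z $bdev_timeout ]] && bdev_timeout=2000   # -t is in milliseconds
    rpc_cmd bdev_wait_for_examine
    rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" > /dev/null
}
```

In the trace this is what turns `bdev_malloc_create 32 512 -b BaseBdev4` into a synchronization point: the test cannot proceed to `bdev_raid_create` until the new bdev answers `bdev_get_bdevs`.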
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.870 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.870 "name": "Existed_Raid", 00:15:46.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.870 "strip_size_kb": 64, 00:15:46.871 "state": "configuring", 00:15:46.871 
"raid_level": "raid0", 00:15:46.871 "superblock": false, 00:15:46.871 "num_base_bdevs": 4, 00:15:46.871 "num_base_bdevs_discovered": 3, 00:15:46.871 "num_base_bdevs_operational": 4, 00:15:46.871 "base_bdevs_list": [ 00:15:46.871 { 00:15:46.871 "name": "BaseBdev1", 00:15:46.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.871 "is_configured": false, 00:15:46.871 "data_offset": 0, 00:15:46.871 "data_size": 0 00:15:46.871 }, 00:15:46.871 { 00:15:46.871 "name": "BaseBdev2", 00:15:46.871 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:46.871 "is_configured": true, 00:15:46.871 "data_offset": 0, 00:15:46.871 "data_size": 65536 00:15:46.871 }, 00:15:46.871 { 00:15:46.871 "name": "BaseBdev3", 00:15:46.871 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:46.871 "is_configured": true, 00:15:46.871 "data_offset": 0, 00:15:46.871 "data_size": 65536 00:15:46.871 }, 00:15:46.871 { 00:15:46.871 "name": "BaseBdev4", 00:15:46.871 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:46.871 "is_configured": true, 00:15:46.871 "data_offset": 0, 00:15:46.871 "data_size": 65536 00:15:46.871 } 00:15:46.871 ] 00:15:46.871 }' 00:15:46.871 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.871 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.438 [2024-11-06 09:08:23.808722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.438 "name": "Existed_Raid", 00:15:47.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.438 "strip_size_kb": 64, 00:15:47.438 "state": "configuring", 00:15:47.438 "raid_level": "raid0", 00:15:47.438 "superblock": false, 00:15:47.438 
"num_base_bdevs": 4, 00:15:47.438 "num_base_bdevs_discovered": 2, 00:15:47.438 "num_base_bdevs_operational": 4, 00:15:47.438 "base_bdevs_list": [ 00:15:47.438 { 00:15:47.438 "name": "BaseBdev1", 00:15:47.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.438 "is_configured": false, 00:15:47.438 "data_offset": 0, 00:15:47.438 "data_size": 0 00:15:47.438 }, 00:15:47.438 { 00:15:47.438 "name": null, 00:15:47.438 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:47.438 "is_configured": false, 00:15:47.438 "data_offset": 0, 00:15:47.438 "data_size": 65536 00:15:47.438 }, 00:15:47.438 { 00:15:47.438 "name": "BaseBdev3", 00:15:47.438 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:47.438 "is_configured": true, 00:15:47.438 "data_offset": 0, 00:15:47.438 "data_size": 65536 00:15:47.438 }, 00:15:47.438 { 00:15:47.438 "name": "BaseBdev4", 00:15:47.438 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:47.438 "is_configured": true, 00:15:47.438 "data_offset": 0, 00:15:47.438 "data_size": 65536 00:15:47.438 } 00:15:47.438 ] 00:15:47.438 }' 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.438 09:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:48.015 09:08:24 
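The `@295` check that just ran (`jq '.[0].base_bdevs_list[1].is_configured'` returning `false`) is worth spelling out: `bdev_raid_remove_base_bdev BaseBdev2` does not shrink `base_bdevs_list`. Slot 1 keeps its uuid (`c77a1136-…`) while `name` goes to `null` and `is_configured` flips to `false`, so `num_base_bdevs_discovered` drops from 3 to 2 but `num_base_bdevs_operational` stays 4. A dependency-free emulation of that bookkeeping on a trimmed copy of the traced list (sed/grep stand in for the jq filter the real test uses):

```shell
# Slot bookkeeping after bdev_raid_remove_base_bdev, emulated on a trimmed
# copy of the base_bdevs_list printed in the trace. grep stands in for the
# jq filter so the sketch has no dependencies beyond coreutils.
blob='[ { "name": "BaseBdev1", "is_configured": false },
        { "name": null, "is_configured": false },
        { "name": "BaseBdev3", "is_configured": true },
        { "name": "BaseBdev4", "is_configured": true } ]'
# The list keeps all four slots; only the configured ones count as discovered.
slots=$(grep -o '"name":' <<< "$blob" | wc -l)
discovered=$(grep -o '"is_configured": true' <<< "$blob" | wc -l)
echo "slots=$((slots)) discovered=$((discovered))"   # -> slots=4 discovered=2
```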
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.015 [2024-11-06 09:08:24.422526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.015 BaseBdev1 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.015 09:08:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.015 [ 00:15:48.015 { 00:15:48.015 "name": "BaseBdev1", 00:15:48.015 "aliases": [ 00:15:48.015 "daa3cd55-09be-472e-b165-6ce3f59b7c14" 00:15:48.015 ], 00:15:48.015 "product_name": "Malloc disk", 00:15:48.015 "block_size": 512, 00:15:48.015 "num_blocks": 65536, 00:15:48.015 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:48.015 "assigned_rate_limits": { 00:15:48.015 "rw_ios_per_sec": 0, 00:15:48.015 "rw_mbytes_per_sec": 0, 00:15:48.015 "r_mbytes_per_sec": 0, 00:15:48.015 "w_mbytes_per_sec": 0 00:15:48.015 }, 00:15:48.015 "claimed": true, 00:15:48.015 "claim_type": "exclusive_write", 00:15:48.015 "zoned": false, 00:15:48.015 "supported_io_types": { 00:15:48.015 "read": true, 00:15:48.015 "write": true, 00:15:48.015 "unmap": true, 00:15:48.015 "flush": true, 00:15:48.015 "reset": true, 00:15:48.015 "nvme_admin": false, 00:15:48.015 "nvme_io": false, 00:15:48.015 "nvme_io_md": false, 00:15:48.015 "write_zeroes": true, 00:15:48.015 "zcopy": true, 00:15:48.015 "get_zone_info": false, 00:15:48.015 "zone_management": false, 00:15:48.015 "zone_append": false, 00:15:48.015 "compare": false, 00:15:48.015 "compare_and_write": false, 00:15:48.015 "abort": true, 00:15:48.015 "seek_hole": false, 00:15:48.015 "seek_data": false, 00:15:48.015 "copy": true, 00:15:48.015 "nvme_iov_md": false 00:15:48.015 }, 00:15:48.015 "memory_domains": [ 00:15:48.015 { 00:15:48.015 "dma_device_id": "system", 00:15:48.015 "dma_device_type": 1 00:15:48.015 }, 00:15:48.015 { 00:15:48.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.015 "dma_device_type": 2 00:15:48.015 } 00:15:48.015 ], 00:15:48.015 "driver_specific": {} 00:15:48.015 } 00:15:48.015 ] 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.016 "name": "Existed_Raid", 00:15:48.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.016 "strip_size_kb": 64, 00:15:48.016 "state": "configuring", 00:15:48.016 "raid_level": "raid0", 00:15:48.016 "superblock": false, 
00:15:48.016 "num_base_bdevs": 4, 00:15:48.016 "num_base_bdevs_discovered": 3, 00:15:48.016 "num_base_bdevs_operational": 4, 00:15:48.016 "base_bdevs_list": [ 00:15:48.016 { 00:15:48.016 "name": "BaseBdev1", 00:15:48.016 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:48.016 "is_configured": true, 00:15:48.016 "data_offset": 0, 00:15:48.016 "data_size": 65536 00:15:48.016 }, 00:15:48.016 { 00:15:48.016 "name": null, 00:15:48.016 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:48.016 "is_configured": false, 00:15:48.016 "data_offset": 0, 00:15:48.016 "data_size": 65536 00:15:48.016 }, 00:15:48.016 { 00:15:48.016 "name": "BaseBdev3", 00:15:48.016 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:48.016 "is_configured": true, 00:15:48.016 "data_offset": 0, 00:15:48.016 "data_size": 65536 00:15:48.016 }, 00:15:48.016 { 00:15:48.016 "name": "BaseBdev4", 00:15:48.016 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:48.016 "is_configured": true, 00:15:48.016 "data_offset": 0, 00:15:48.016 "data_size": 65536 00:15:48.016 } 00:15:48.016 ] 00:15:48.016 }' 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.016 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.648 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.648 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.648 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.648 09:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:48.648 09:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:48.648 09:08:25 
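Every `verify_raid_bdev_state Existed_Raid configuring raid0 64 4` expansion above (`@103`..`@115`) has the same shape: dump all raid bdevs, pick `Existed_Raid` with a jq `select`, then compare the fields against the expected values. A reduced sketch of that shape — the real helper uses jq and checks `raid_level`, `strip_size_kb`, and the base-bdev counts as well; here sed replaces jq so the sketch has no dependencies, and only `state` is compared:

```shell
# Reduced sketch of verify_raid_bdev_state (@103..@115 in the trace):
# fetch the raid bdev info and compare its "state" field against the
# expected value. The real helper selects by name with
#   jq -r '.[] | select(.name == "Existed_Raid")'
# and also verifies raid_level, strip_size_kb and the bdev counts.
verify_raid_bdev_state() {
    local raid_bdev_name=$1
    local expected_state=$2
    local info state
    info=$(rpc_cmd bdev_raid_get_bdevs all)
    state=$(printf '%s\n' "$info" | sed -n 's/.*"state": *"\([a-z]*\)".*/\1/p')
    [[ "$state" == "$expected_state" ]]
}
```

In this trace the expected state stays `configuring` throughout, because `num_base_bdevs_discovered` oscillates between 2 and 3 and never reaches `num_base_bdevs_operational=4`.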
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.648 [2024-11-06 09:08:25.026796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.648 "name": "Existed_Raid", 00:15:48.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.648 "strip_size_kb": 64, 00:15:48.648 "state": "configuring", 00:15:48.648 "raid_level": "raid0", 00:15:48.648 "superblock": false, 00:15:48.648 "num_base_bdevs": 4, 00:15:48.648 "num_base_bdevs_discovered": 2, 00:15:48.648 "num_base_bdevs_operational": 4, 00:15:48.648 "base_bdevs_list": [ 00:15:48.648 { 00:15:48.648 "name": "BaseBdev1", 00:15:48.648 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:48.648 "is_configured": true, 00:15:48.648 "data_offset": 0, 00:15:48.648 "data_size": 65536 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "name": null, 00:15:48.648 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:48.648 "is_configured": false, 00:15:48.648 "data_offset": 0, 00:15:48.648 "data_size": 65536 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "name": null, 00:15:48.648 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:48.648 "is_configured": false, 00:15:48.648 "data_offset": 0, 00:15:48.648 "data_size": 65536 00:15:48.648 }, 00:15:48.648 { 00:15:48.648 "name": "BaseBdev4", 00:15:48.648 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:48.648 "is_configured": true, 00:15:48.648 "data_offset": 0, 00:15:48.648 "data_size": 65536 00:15:48.648 } 00:15:48.648 ] 00:15:48.648 }' 00:15:48.648 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.649 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.216 [2024-11-06 09:08:25.614945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.216 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.216 "name": "Existed_Raid", 00:15:49.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.216 "strip_size_kb": 64, 00:15:49.216 "state": "configuring", 00:15:49.217 "raid_level": "raid0", 00:15:49.217 "superblock": false, 00:15:49.217 "num_base_bdevs": 4, 00:15:49.217 "num_base_bdevs_discovered": 3, 00:15:49.217 "num_base_bdevs_operational": 4, 00:15:49.217 "base_bdevs_list": [ 00:15:49.217 { 00:15:49.217 "name": "BaseBdev1", 00:15:49.217 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:49.217 "is_configured": true, 00:15:49.217 "data_offset": 0, 00:15:49.217 "data_size": 65536 00:15:49.217 }, 00:15:49.217 { 00:15:49.217 "name": null, 00:15:49.217 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:49.217 "is_configured": false, 00:15:49.217 "data_offset": 0, 00:15:49.217 "data_size": 65536 00:15:49.217 }, 00:15:49.217 { 00:15:49.217 "name": "BaseBdev3", 00:15:49.217 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:49.217 "is_configured": 
true, 00:15:49.217 "data_offset": 0, 00:15:49.217 "data_size": 65536 00:15:49.217 }, 00:15:49.217 { 00:15:49.217 "name": "BaseBdev4", 00:15:49.217 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:49.217 "is_configured": true, 00:15:49.217 "data_offset": 0, 00:15:49.217 "data_size": 65536 00:15:49.217 } 00:15:49.217 ] 00:15:49.217 }' 00:15:49.217 09:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.217 09:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.784 [2024-11-06 09:08:26.179146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.784 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.043 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.043 "name": "Existed_Raid", 00:15:50.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.043 "strip_size_kb": 64, 00:15:50.043 "state": "configuring", 00:15:50.043 "raid_level": "raid0", 00:15:50.043 "superblock": false, 00:15:50.043 "num_base_bdevs": 4, 00:15:50.043 "num_base_bdevs_discovered": 2, 00:15:50.043 "num_base_bdevs_operational": 4, 00:15:50.043 
"base_bdevs_list": [ 00:15:50.043 { 00:15:50.043 "name": null, 00:15:50.043 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:50.043 "is_configured": false, 00:15:50.043 "data_offset": 0, 00:15:50.043 "data_size": 65536 00:15:50.043 }, 00:15:50.043 { 00:15:50.043 "name": null, 00:15:50.043 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:50.043 "is_configured": false, 00:15:50.043 "data_offset": 0, 00:15:50.043 "data_size": 65536 00:15:50.043 }, 00:15:50.043 { 00:15:50.043 "name": "BaseBdev3", 00:15:50.043 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:50.043 "is_configured": true, 00:15:50.043 "data_offset": 0, 00:15:50.043 "data_size": 65536 00:15:50.043 }, 00:15:50.043 { 00:15:50.043 "name": "BaseBdev4", 00:15:50.043 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:50.043 "is_configured": true, 00:15:50.043 "data_offset": 0, 00:15:50.043 "data_size": 65536 00:15:50.043 } 00:15:50.043 ] 00:15:50.043 }' 00:15:50.043 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.043 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:50.612 09:08:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.612 [2024-11-06 09:08:26.909601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.612 "name": "Existed_Raid", 00:15:50.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.612 "strip_size_kb": 64, 00:15:50.612 "state": "configuring", 00:15:50.612 "raid_level": "raid0", 00:15:50.612 "superblock": false, 00:15:50.612 "num_base_bdevs": 4, 00:15:50.612 "num_base_bdevs_discovered": 3, 00:15:50.612 "num_base_bdevs_operational": 4, 00:15:50.612 "base_bdevs_list": [ 00:15:50.612 { 00:15:50.612 "name": null, 00:15:50.612 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:50.612 "is_configured": false, 00:15:50.612 "data_offset": 0, 00:15:50.612 "data_size": 65536 00:15:50.612 }, 00:15:50.612 { 00:15:50.612 "name": "BaseBdev2", 00:15:50.612 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:50.612 "is_configured": true, 00:15:50.612 "data_offset": 0, 00:15:50.612 "data_size": 65536 00:15:50.612 }, 00:15:50.612 { 00:15:50.612 "name": "BaseBdev3", 00:15:50.612 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:50.612 "is_configured": true, 00:15:50.612 "data_offset": 0, 00:15:50.612 "data_size": 65536 00:15:50.612 }, 00:15:50.612 { 00:15:50.612 "name": "BaseBdev4", 00:15:50.612 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:50.612 "is_configured": true, 00:15:50.612 "data_offset": 0, 00:15:50.612 "data_size": 65536 00:15:50.612 } 00:15:50.612 ] 00:15:50.612 }' 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.612 09:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:51.180 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u daa3cd55-09be-472e-b165-6ce3f59b7c14 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.181 [2024-11-06 09:08:27.547723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:51.181 [2024-11-06 09:08:27.548049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:51.181 [2024-11-06 09:08:27.548074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:15:51.181 [2024-11-06 09:08:27.548406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:51.181 [2024-11-06 09:08:27.548595] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:51.181 [2024-11-06 09:08:27.548616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:51.181 [2024-11-06 09:08:27.548922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.181 NewBaseBdev 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.181 [ 00:15:51.181 { 
00:15:51.181 "name": "NewBaseBdev", 00:15:51.181 "aliases": [ 00:15:51.181 "daa3cd55-09be-472e-b165-6ce3f59b7c14" 00:15:51.181 ], 00:15:51.181 "product_name": "Malloc disk", 00:15:51.181 "block_size": 512, 00:15:51.181 "num_blocks": 65536, 00:15:51.181 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:51.181 "assigned_rate_limits": { 00:15:51.181 "rw_ios_per_sec": 0, 00:15:51.181 "rw_mbytes_per_sec": 0, 00:15:51.181 "r_mbytes_per_sec": 0, 00:15:51.181 "w_mbytes_per_sec": 0 00:15:51.181 }, 00:15:51.181 "claimed": true, 00:15:51.181 "claim_type": "exclusive_write", 00:15:51.181 "zoned": false, 00:15:51.181 "supported_io_types": { 00:15:51.181 "read": true, 00:15:51.181 "write": true, 00:15:51.181 "unmap": true, 00:15:51.181 "flush": true, 00:15:51.181 "reset": true, 00:15:51.181 "nvme_admin": false, 00:15:51.181 "nvme_io": false, 00:15:51.181 "nvme_io_md": false, 00:15:51.181 "write_zeroes": true, 00:15:51.181 "zcopy": true, 00:15:51.181 "get_zone_info": false, 00:15:51.181 "zone_management": false, 00:15:51.181 "zone_append": false, 00:15:51.181 "compare": false, 00:15:51.181 "compare_and_write": false, 00:15:51.181 "abort": true, 00:15:51.181 "seek_hole": false, 00:15:51.181 "seek_data": false, 00:15:51.181 "copy": true, 00:15:51.181 "nvme_iov_md": false 00:15:51.181 }, 00:15:51.181 "memory_domains": [ 00:15:51.181 { 00:15:51.181 "dma_device_id": "system", 00:15:51.181 "dma_device_type": 1 00:15:51.181 }, 00:15:51.181 { 00:15:51.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.181 "dma_device_type": 2 00:15:51.181 } 00:15:51.181 ], 00:15:51.181 "driver_specific": {} 00:15:51.181 } 00:15:51.181 ] 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:15:51.181 
09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.181 "name": "Existed_Raid", 00:15:51.181 "uuid": "37ed585c-7f8e-4bf2-a0ad-b1eca9b56aa6", 00:15:51.181 "strip_size_kb": 64, 00:15:51.181 "state": "online", 00:15:51.181 "raid_level": "raid0", 00:15:51.181 "superblock": false, 00:15:51.181 "num_base_bdevs": 4, 00:15:51.181 "num_base_bdevs_discovered": 4, 00:15:51.181 
"num_base_bdevs_operational": 4, 00:15:51.181 "base_bdevs_list": [ 00:15:51.181 { 00:15:51.181 "name": "NewBaseBdev", 00:15:51.181 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:51.181 "is_configured": true, 00:15:51.181 "data_offset": 0, 00:15:51.181 "data_size": 65536 00:15:51.181 }, 00:15:51.181 { 00:15:51.181 "name": "BaseBdev2", 00:15:51.181 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:51.181 "is_configured": true, 00:15:51.181 "data_offset": 0, 00:15:51.181 "data_size": 65536 00:15:51.181 }, 00:15:51.181 { 00:15:51.181 "name": "BaseBdev3", 00:15:51.181 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:51.181 "is_configured": true, 00:15:51.181 "data_offset": 0, 00:15:51.181 "data_size": 65536 00:15:51.181 }, 00:15:51.181 { 00:15:51.181 "name": "BaseBdev4", 00:15:51.181 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:51.181 "is_configured": true, 00:15:51.181 "data_offset": 0, 00:15:51.181 "data_size": 65536 00:15:51.181 } 00:15:51.181 ] 00:15:51.181 }' 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.181 09:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.747 [2024-11-06 09:08:28.084390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.747 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:51.747 "name": "Existed_Raid", 00:15:51.747 "aliases": [ 00:15:51.747 "37ed585c-7f8e-4bf2-a0ad-b1eca9b56aa6" 00:15:51.747 ], 00:15:51.747 "product_name": "Raid Volume", 00:15:51.747 "block_size": 512, 00:15:51.747 "num_blocks": 262144, 00:15:51.747 "uuid": "37ed585c-7f8e-4bf2-a0ad-b1eca9b56aa6", 00:15:51.747 "assigned_rate_limits": { 00:15:51.747 "rw_ios_per_sec": 0, 00:15:51.747 "rw_mbytes_per_sec": 0, 00:15:51.747 "r_mbytes_per_sec": 0, 00:15:51.747 "w_mbytes_per_sec": 0 00:15:51.747 }, 00:15:51.747 "claimed": false, 00:15:51.747 "zoned": false, 00:15:51.747 "supported_io_types": { 00:15:51.747 "read": true, 00:15:51.747 "write": true, 00:15:51.747 "unmap": true, 00:15:51.747 "flush": true, 00:15:51.747 "reset": true, 00:15:51.747 "nvme_admin": false, 00:15:51.747 "nvme_io": false, 00:15:51.747 "nvme_io_md": false, 00:15:51.747 "write_zeroes": true, 00:15:51.747 "zcopy": false, 00:15:51.747 "get_zone_info": false, 00:15:51.747 "zone_management": false, 00:15:51.747 "zone_append": false, 00:15:51.747 "compare": false, 00:15:51.747 "compare_and_write": false, 00:15:51.747 "abort": false, 00:15:51.747 "seek_hole": false, 00:15:51.747 "seek_data": false, 00:15:51.747 "copy": false, 00:15:51.747 "nvme_iov_md": false 00:15:51.747 }, 00:15:51.747 "memory_domains": [ 00:15:51.747 { 00:15:51.747 "dma_device_id": "system", 
00:15:51.747 "dma_device_type": 1 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.747 "dma_device_type": 2 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "dma_device_id": "system", 00:15:51.747 "dma_device_type": 1 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.747 "dma_device_type": 2 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "dma_device_id": "system", 00:15:51.747 "dma_device_type": 1 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.747 "dma_device_type": 2 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "dma_device_id": "system", 00:15:51.747 "dma_device_type": 1 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.747 "dma_device_type": 2 00:15:51.747 } 00:15:51.747 ], 00:15:51.747 "driver_specific": { 00:15:51.747 "raid": { 00:15:51.747 "uuid": "37ed585c-7f8e-4bf2-a0ad-b1eca9b56aa6", 00:15:51.747 "strip_size_kb": 64, 00:15:51.747 "state": "online", 00:15:51.747 "raid_level": "raid0", 00:15:51.747 "superblock": false, 00:15:51.747 "num_base_bdevs": 4, 00:15:51.747 "num_base_bdevs_discovered": 4, 00:15:51.747 "num_base_bdevs_operational": 4, 00:15:51.747 "base_bdevs_list": [ 00:15:51.747 { 00:15:51.747 "name": "NewBaseBdev", 00:15:51.747 "uuid": "daa3cd55-09be-472e-b165-6ce3f59b7c14", 00:15:51.747 "is_configured": true, 00:15:51.747 "data_offset": 0, 00:15:51.747 "data_size": 65536 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "name": "BaseBdev2", 00:15:51.747 "uuid": "c77a1136-366e-4495-a778-367fcd9b670f", 00:15:51.747 "is_configured": true, 00:15:51.747 "data_offset": 0, 00:15:51.747 "data_size": 65536 00:15:51.747 }, 00:15:51.747 { 00:15:51.747 "name": "BaseBdev3", 00:15:51.747 "uuid": "55114b23-0b1e-4330-8d07-92474cab080f", 00:15:51.747 "is_configured": true, 00:15:51.747 "data_offset": 0, 00:15:51.747 "data_size": 65536 00:15:51.748 }, 00:15:51.748 { 00:15:51.748 "name": "BaseBdev4", 
00:15:51.748 "uuid": "6961865f-a76f-4e80-a435-3bff7dbd4530", 00:15:51.748 "is_configured": true, 00:15:51.748 "data_offset": 0, 00:15:51.748 "data_size": 65536 00:15:51.748 } 00:15:51.748 ] 00:15:51.748 } 00:15:51.748 } 00:15:51.748 }' 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:51.748 BaseBdev2 00:15:51.748 BaseBdev3 00:15:51.748 BaseBdev4' 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.748 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:52.006 09:08:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.006 [2024-11-06 09:08:28.472056] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.006 [2024-11-06 09:08:28.472094] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.006 [2024-11-06 09:08:28.472202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.006 [2024-11-06 09:08:28.472297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.006 [2024-11-06 09:08:28.472312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69390 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 
-- # '[' -z 69390 ']' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 69390 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69390 00:15:52.006 killing process with pid 69390 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69390' 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 69390 00:15:52.006 [2024-11-06 09:08:28.511880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.006 09:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 69390 00:15:52.572 [2024-11-06 09:08:28.869462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:53.508 00:15:53.508 real 0m12.854s 00:15:53.508 user 0m21.380s 00:15:53.508 sys 0m1.770s 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.508 ************************************ 00:15:53.508 END TEST raid_state_function_test 00:15:53.508 ************************************ 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.508 09:08:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:15:53.508 09:08:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:53.508 09:08:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:53.508 09:08:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.508 ************************************ 00:15:53.508 START TEST raid_state_function_test_sb 00:15:53.508 ************************************ 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:53.508 09:08:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:53.508 Process raid pid: 70080 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70080 00:15:53.508 09:08:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70080' 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70080 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 70080 ']' 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:53.508 09:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.767 [2024-11-06 09:08:30.073515] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:15:53.768 [2024-11-06 09:08:30.073923] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.768 [2024-11-06 09:08:30.261312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.028 [2024-11-06 09:08:30.388633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.286 [2024-11-06 09:08:30.593536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.286 [2024-11-06 09:08:30.593589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.545 [2024-11-06 09:08:31.062635] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.545 [2024-11-06 09:08:31.062701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.545 [2024-11-06 09:08:31.062718] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.545 [2024-11-06 09:08:31.062735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.545 [2024-11-06 09:08:31.062745] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:15:54.545 [2024-11-06 09:08:31.062759] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:54.545 [2024-11-06 09:08:31.062769] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:54.545 [2024-11-06 09:08:31.062783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.545 09:08:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.545 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.804 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.804 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.804 "name": "Existed_Raid", 00:15:54.804 "uuid": "933ca2b3-4bd1-4f99-adca-983df8262de8", 00:15:54.804 "strip_size_kb": 64, 00:15:54.804 "state": "configuring", 00:15:54.804 "raid_level": "raid0", 00:15:54.804 "superblock": true, 00:15:54.804 "num_base_bdevs": 4, 00:15:54.804 "num_base_bdevs_discovered": 0, 00:15:54.804 "num_base_bdevs_operational": 4, 00:15:54.804 "base_bdevs_list": [ 00:15:54.804 { 00:15:54.804 "name": "BaseBdev1", 00:15:54.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.804 "is_configured": false, 00:15:54.804 "data_offset": 0, 00:15:54.804 "data_size": 0 00:15:54.804 }, 00:15:54.804 { 00:15:54.804 "name": "BaseBdev2", 00:15:54.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.804 "is_configured": false, 00:15:54.804 "data_offset": 0, 00:15:54.804 "data_size": 0 00:15:54.804 }, 00:15:54.804 { 00:15:54.804 "name": "BaseBdev3", 00:15:54.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.804 "is_configured": false, 00:15:54.804 "data_offset": 0, 00:15:54.804 "data_size": 0 00:15:54.804 }, 00:15:54.804 { 00:15:54.804 "name": "BaseBdev4", 00:15:54.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.804 "is_configured": false, 00:15:54.804 "data_offset": 0, 00:15:54.804 "data_size": 0 00:15:54.804 } 00:15:54.804 ] 00:15:54.804 }' 00:15:54.804 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.804 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.063 09:08:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.063 [2024-11-06 09:08:31.566698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.063 [2024-11-06 09:08:31.566891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.063 [2024-11-06 09:08:31.574696] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.063 [2024-11-06 09:08:31.574750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.063 [2024-11-06 09:08:31.574765] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.063 [2024-11-06 09:08:31.574782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.063 [2024-11-06 09:08:31.574791] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.063 [2024-11-06 09:08:31.574805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.063 [2024-11-06 09:08:31.574826] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:15:55.063 [2024-11-06 09:08:31.574843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.063 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.322 [2024-11-06 09:08:31.619702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.322 BaseBdev1 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.322 [ 00:15:55.322 { 00:15:55.322 "name": "BaseBdev1", 00:15:55.322 "aliases": [ 00:15:55.322 "a5015c42-be6d-4ef0-8743-45ffa8030e6a" 00:15:55.322 ], 00:15:55.322 "product_name": "Malloc disk", 00:15:55.322 "block_size": 512, 00:15:55.322 "num_blocks": 65536, 00:15:55.322 "uuid": "a5015c42-be6d-4ef0-8743-45ffa8030e6a", 00:15:55.322 "assigned_rate_limits": { 00:15:55.322 "rw_ios_per_sec": 0, 00:15:55.322 "rw_mbytes_per_sec": 0, 00:15:55.322 "r_mbytes_per_sec": 0, 00:15:55.322 "w_mbytes_per_sec": 0 00:15:55.322 }, 00:15:55.322 "claimed": true, 00:15:55.322 "claim_type": "exclusive_write", 00:15:55.322 "zoned": false, 00:15:55.322 "supported_io_types": { 00:15:55.322 "read": true, 00:15:55.322 "write": true, 00:15:55.322 "unmap": true, 00:15:55.322 "flush": true, 00:15:55.322 "reset": true, 00:15:55.322 "nvme_admin": false, 00:15:55.322 "nvme_io": false, 00:15:55.322 "nvme_io_md": false, 00:15:55.322 "write_zeroes": true, 00:15:55.322 "zcopy": true, 00:15:55.322 "get_zone_info": false, 00:15:55.322 "zone_management": false, 00:15:55.322 "zone_append": false, 00:15:55.322 "compare": false, 00:15:55.322 "compare_and_write": false, 00:15:55.322 "abort": true, 00:15:55.322 "seek_hole": false, 00:15:55.322 "seek_data": false, 00:15:55.322 "copy": true, 00:15:55.322 "nvme_iov_md": false 00:15:55.322 }, 00:15:55.322 "memory_domains": [ 00:15:55.322 { 00:15:55.322 "dma_device_id": "system", 00:15:55.322 "dma_device_type": 1 00:15:55.322 }, 00:15:55.322 { 00:15:55.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.322 "dma_device_type": 2 00:15:55.322 } 
00:15:55.322 ], 00:15:55.322 "driver_specific": {} 00:15:55.322 } 00:15:55.322 ] 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.322 09:08:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.322 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.322 "name": "Existed_Raid", 00:15:55.322 "uuid": "648e9855-12b9-4efe-b604-23d067f716f7", 00:15:55.322 "strip_size_kb": 64, 00:15:55.322 "state": "configuring", 00:15:55.322 "raid_level": "raid0", 00:15:55.323 "superblock": true, 00:15:55.323 "num_base_bdevs": 4, 00:15:55.323 "num_base_bdevs_discovered": 1, 00:15:55.323 "num_base_bdevs_operational": 4, 00:15:55.323 "base_bdevs_list": [ 00:15:55.323 { 00:15:55.323 "name": "BaseBdev1", 00:15:55.323 "uuid": "a5015c42-be6d-4ef0-8743-45ffa8030e6a", 00:15:55.323 "is_configured": true, 00:15:55.323 "data_offset": 2048, 00:15:55.323 "data_size": 63488 00:15:55.323 }, 00:15:55.323 { 00:15:55.323 "name": "BaseBdev2", 00:15:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.323 "is_configured": false, 00:15:55.323 "data_offset": 0, 00:15:55.323 "data_size": 0 00:15:55.323 }, 00:15:55.323 { 00:15:55.323 "name": "BaseBdev3", 00:15:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.323 "is_configured": false, 00:15:55.323 "data_offset": 0, 00:15:55.323 "data_size": 0 00:15:55.323 }, 00:15:55.323 { 00:15:55.323 "name": "BaseBdev4", 00:15:55.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.323 "is_configured": false, 00:15:55.323 "data_offset": 0, 00:15:55.323 "data_size": 0 00:15:55.323 } 00:15:55.323 ] 00:15:55.323 }' 00:15:55.323 09:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.323 09:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.890 09:08:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.890 [2024-11-06 09:08:32.147960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.890 [2024-11-06 09:08:32.148150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.890 [2024-11-06 09:08:32.160024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.890 [2024-11-06 09:08:32.162624] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.890 [2024-11-06 09:08:32.162678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.890 [2024-11-06 09:08:32.162695] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:55.890 [2024-11-06 09:08:32.162722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:55.890 [2024-11-06 09:08:32.162732] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:55.890 [2024-11-06 09:08:32.162745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:55.890 "name": "Existed_Raid", 00:15:55.890 "uuid": "cbcc066d-38d4-4425-9e83-0087196bd9f4", 00:15:55.890 "strip_size_kb": 64, 00:15:55.890 "state": "configuring", 00:15:55.890 "raid_level": "raid0", 00:15:55.890 "superblock": true, 00:15:55.890 "num_base_bdevs": 4, 00:15:55.890 "num_base_bdevs_discovered": 1, 00:15:55.890 "num_base_bdevs_operational": 4, 00:15:55.890 "base_bdevs_list": [ 00:15:55.890 { 00:15:55.890 "name": "BaseBdev1", 00:15:55.890 "uuid": "a5015c42-be6d-4ef0-8743-45ffa8030e6a", 00:15:55.890 "is_configured": true, 00:15:55.890 "data_offset": 2048, 00:15:55.890 "data_size": 63488 00:15:55.890 }, 00:15:55.890 { 00:15:55.890 "name": "BaseBdev2", 00:15:55.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.890 "is_configured": false, 00:15:55.890 "data_offset": 0, 00:15:55.890 "data_size": 0 00:15:55.890 }, 00:15:55.890 { 00:15:55.890 "name": "BaseBdev3", 00:15:55.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.890 "is_configured": false, 00:15:55.890 "data_offset": 0, 00:15:55.890 "data_size": 0 00:15:55.890 }, 00:15:55.890 { 00:15:55.890 "name": "BaseBdev4", 00:15:55.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.890 "is_configured": false, 00:15:55.890 "data_offset": 0, 00:15:55.890 "data_size": 0 00:15:55.890 } 00:15:55.890 ] 00:15:55.890 }' 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.890 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.149 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:56.149 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.149 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.408 [2024-11-06 09:08:32.718522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:56.408 BaseBdev2 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.408 [ 00:15:56.408 { 00:15:56.408 "name": "BaseBdev2", 00:15:56.408 "aliases": [ 00:15:56.408 "a2ca3238-f9ef-40ec-812b-7994aec16dbf" 00:15:56.408 ], 00:15:56.408 "product_name": "Malloc disk", 00:15:56.408 "block_size": 512, 00:15:56.408 "num_blocks": 65536, 00:15:56.408 "uuid": "a2ca3238-f9ef-40ec-812b-7994aec16dbf", 
00:15:56.408 "assigned_rate_limits": { 00:15:56.408 "rw_ios_per_sec": 0, 00:15:56.408 "rw_mbytes_per_sec": 0, 00:15:56.408 "r_mbytes_per_sec": 0, 00:15:56.408 "w_mbytes_per_sec": 0 00:15:56.408 }, 00:15:56.408 "claimed": true, 00:15:56.408 "claim_type": "exclusive_write", 00:15:56.408 "zoned": false, 00:15:56.408 "supported_io_types": { 00:15:56.408 "read": true, 00:15:56.408 "write": true, 00:15:56.408 "unmap": true, 00:15:56.408 "flush": true, 00:15:56.408 "reset": true, 00:15:56.408 "nvme_admin": false, 00:15:56.408 "nvme_io": false, 00:15:56.408 "nvme_io_md": false, 00:15:56.408 "write_zeroes": true, 00:15:56.408 "zcopy": true, 00:15:56.408 "get_zone_info": false, 00:15:56.408 "zone_management": false, 00:15:56.408 "zone_append": false, 00:15:56.408 "compare": false, 00:15:56.408 "compare_and_write": false, 00:15:56.408 "abort": true, 00:15:56.408 "seek_hole": false, 00:15:56.408 "seek_data": false, 00:15:56.408 "copy": true, 00:15:56.408 "nvme_iov_md": false 00:15:56.408 }, 00:15:56.408 "memory_domains": [ 00:15:56.408 { 00:15:56.408 "dma_device_id": "system", 00:15:56.408 "dma_device_type": 1 00:15:56.408 }, 00:15:56.408 { 00:15:56.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.408 "dma_device_type": 2 00:15:56.408 } 00:15:56.408 ], 00:15:56.408 "driver_specific": {} 00:15:56.408 } 00:15:56.408 ] 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.408 "name": "Existed_Raid", 00:15:56.408 "uuid": "cbcc066d-38d4-4425-9e83-0087196bd9f4", 00:15:56.408 "strip_size_kb": 64, 00:15:56.408 "state": "configuring", 00:15:56.408 "raid_level": "raid0", 00:15:56.408 "superblock": true, 00:15:56.408 "num_base_bdevs": 4, 00:15:56.408 "num_base_bdevs_discovered": 2, 00:15:56.408 
"num_base_bdevs_operational": 4, 00:15:56.408 "base_bdevs_list": [ 00:15:56.408 { 00:15:56.408 "name": "BaseBdev1", 00:15:56.408 "uuid": "a5015c42-be6d-4ef0-8743-45ffa8030e6a", 00:15:56.408 "is_configured": true, 00:15:56.408 "data_offset": 2048, 00:15:56.408 "data_size": 63488 00:15:56.408 }, 00:15:56.408 { 00:15:56.408 "name": "BaseBdev2", 00:15:56.408 "uuid": "a2ca3238-f9ef-40ec-812b-7994aec16dbf", 00:15:56.408 "is_configured": true, 00:15:56.408 "data_offset": 2048, 00:15:56.408 "data_size": 63488 00:15:56.408 }, 00:15:56.408 { 00:15:56.408 "name": "BaseBdev3", 00:15:56.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.408 "is_configured": false, 00:15:56.408 "data_offset": 0, 00:15:56.408 "data_size": 0 00:15:56.408 }, 00:15:56.408 { 00:15:56.408 "name": "BaseBdev4", 00:15:56.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.408 "is_configured": false, 00:15:56.408 "data_offset": 0, 00:15:56.408 "data_size": 0 00:15:56.408 } 00:15:56.408 ] 00:15:56.408 }' 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.408 09:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.978 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:56.978 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.978 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.978 [2024-11-06 09:08:33.338427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.978 BaseBdev3 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.979 [ 00:15:56.979 { 00:15:56.979 "name": "BaseBdev3", 00:15:56.979 "aliases": [ 00:15:56.979 "3a6185d9-096f-48a7-ae74-1a823e5b5ce3" 00:15:56.979 ], 00:15:56.979 "product_name": "Malloc disk", 00:15:56.979 "block_size": 512, 00:15:56.979 "num_blocks": 65536, 00:15:56.979 "uuid": "3a6185d9-096f-48a7-ae74-1a823e5b5ce3", 00:15:56.979 "assigned_rate_limits": { 00:15:56.979 "rw_ios_per_sec": 0, 00:15:56.979 "rw_mbytes_per_sec": 0, 00:15:56.979 "r_mbytes_per_sec": 0, 00:15:56.979 "w_mbytes_per_sec": 0 00:15:56.979 }, 00:15:56.979 "claimed": true, 00:15:56.979 "claim_type": "exclusive_write", 00:15:56.979 "zoned": false, 00:15:56.979 "supported_io_types": { 
00:15:56.979 "read": true, 00:15:56.979 "write": true, 00:15:56.979 "unmap": true, 00:15:56.979 "flush": true, 00:15:56.979 "reset": true, 00:15:56.979 "nvme_admin": false, 00:15:56.979 "nvme_io": false, 00:15:56.979 "nvme_io_md": false, 00:15:56.979 "write_zeroes": true, 00:15:56.979 "zcopy": true, 00:15:56.979 "get_zone_info": false, 00:15:56.979 "zone_management": false, 00:15:56.979 "zone_append": false, 00:15:56.979 "compare": false, 00:15:56.979 "compare_and_write": false, 00:15:56.979 "abort": true, 00:15:56.979 "seek_hole": false, 00:15:56.979 "seek_data": false, 00:15:56.979 "copy": true, 00:15:56.979 "nvme_iov_md": false 00:15:56.979 }, 00:15:56.979 "memory_domains": [ 00:15:56.979 { 00:15:56.979 "dma_device_id": "system", 00:15:56.979 "dma_device_type": 1 00:15:56.979 }, 00:15:56.979 { 00:15:56.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.979 "dma_device_type": 2 00:15:56.979 } 00:15:56.979 ], 00:15:56.979 "driver_specific": {} 00:15:56.979 } 00:15:56.979 ] 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.979 "name": "Existed_Raid", 00:15:56.979 "uuid": "cbcc066d-38d4-4425-9e83-0087196bd9f4", 00:15:56.979 "strip_size_kb": 64, 00:15:56.979 "state": "configuring", 00:15:56.979 "raid_level": "raid0", 00:15:56.979 "superblock": true, 00:15:56.979 "num_base_bdevs": 4, 00:15:56.979 "num_base_bdevs_discovered": 3, 00:15:56.979 "num_base_bdevs_operational": 4, 00:15:56.979 "base_bdevs_list": [ 00:15:56.979 { 00:15:56.979 "name": "BaseBdev1", 00:15:56.979 "uuid": "a5015c42-be6d-4ef0-8743-45ffa8030e6a", 00:15:56.979 "is_configured": true, 00:15:56.979 "data_offset": 2048, 00:15:56.979 "data_size": 63488 00:15:56.979 }, 00:15:56.979 { 00:15:56.979 "name": "BaseBdev2", 00:15:56.979 
"uuid": "a2ca3238-f9ef-40ec-812b-7994aec16dbf", 00:15:56.979 "is_configured": true, 00:15:56.979 "data_offset": 2048, 00:15:56.979 "data_size": 63488 00:15:56.979 }, 00:15:56.979 { 00:15:56.979 "name": "BaseBdev3", 00:15:56.979 "uuid": "3a6185d9-096f-48a7-ae74-1a823e5b5ce3", 00:15:56.979 "is_configured": true, 00:15:56.979 "data_offset": 2048, 00:15:56.979 "data_size": 63488 00:15:56.979 }, 00:15:56.979 { 00:15:56.979 "name": "BaseBdev4", 00:15:56.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.979 "is_configured": false, 00:15:56.979 "data_offset": 0, 00:15:56.979 "data_size": 0 00:15:56.979 } 00:15:56.979 ] 00:15:56.979 }' 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.979 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.574 [2024-11-06 09:08:33.938077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.574 [2024-11-06 09:08:33.938446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:57.574 [2024-11-06 09:08:33.938466] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:57.574 [2024-11-06 09:08:33.938820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:57.574 BaseBdev4 00:15:57.574 [2024-11-06 09:08:33.939046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:57.574 [2024-11-06 09:08:33.939070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:57.574 [2024-11-06 09:08:33.939256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.574 [ 00:15:57.574 { 00:15:57.574 "name": "BaseBdev4", 00:15:57.574 "aliases": [ 00:15:57.574 "677bae7f-e0f2-4f1f-aa77-cd8bce8d6650" 00:15:57.574 ], 00:15:57.574 "product_name": "Malloc disk", 00:15:57.574 "block_size": 512, 00:15:57.574 
"num_blocks": 65536, 00:15:57.574 "uuid": "677bae7f-e0f2-4f1f-aa77-cd8bce8d6650", 00:15:57.574 "assigned_rate_limits": { 00:15:57.574 "rw_ios_per_sec": 0, 00:15:57.574 "rw_mbytes_per_sec": 0, 00:15:57.574 "r_mbytes_per_sec": 0, 00:15:57.574 "w_mbytes_per_sec": 0 00:15:57.574 }, 00:15:57.574 "claimed": true, 00:15:57.574 "claim_type": "exclusive_write", 00:15:57.574 "zoned": false, 00:15:57.574 "supported_io_types": { 00:15:57.574 "read": true, 00:15:57.574 "write": true, 00:15:57.574 "unmap": true, 00:15:57.574 "flush": true, 00:15:57.574 "reset": true, 00:15:57.574 "nvme_admin": false, 00:15:57.574 "nvme_io": false, 00:15:57.574 "nvme_io_md": false, 00:15:57.574 "write_zeroes": true, 00:15:57.574 "zcopy": true, 00:15:57.574 "get_zone_info": false, 00:15:57.574 "zone_management": false, 00:15:57.574 "zone_append": false, 00:15:57.574 "compare": false, 00:15:57.574 "compare_and_write": false, 00:15:57.574 "abort": true, 00:15:57.574 "seek_hole": false, 00:15:57.574 "seek_data": false, 00:15:57.574 "copy": true, 00:15:57.574 "nvme_iov_md": false 00:15:57.574 }, 00:15:57.574 "memory_domains": [ 00:15:57.574 { 00:15:57.574 "dma_device_id": "system", 00:15:57.574 "dma_device_type": 1 00:15:57.574 }, 00:15:57.574 { 00:15:57.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.574 "dma_device_type": 2 00:15:57.574 } 00:15:57.574 ], 00:15:57.574 "driver_specific": {} 00:15:57.574 } 00:15:57.574 ] 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.574 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.575 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.575 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.575 09:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.575 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.575 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.575 09:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.575 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.575 "name": "Existed_Raid", 00:15:57.575 "uuid": "cbcc066d-38d4-4425-9e83-0087196bd9f4", 00:15:57.575 "strip_size_kb": 64, 00:15:57.575 "state": "online", 00:15:57.575 "raid_level": "raid0", 00:15:57.575 "superblock": true, 00:15:57.575 "num_base_bdevs": 4, 
00:15:57.575 "num_base_bdevs_discovered": 4, 00:15:57.575 "num_base_bdevs_operational": 4, 00:15:57.575 "base_bdevs_list": [ 00:15:57.575 { 00:15:57.575 "name": "BaseBdev1", 00:15:57.575 "uuid": "a5015c42-be6d-4ef0-8743-45ffa8030e6a", 00:15:57.575 "is_configured": true, 00:15:57.575 "data_offset": 2048, 00:15:57.575 "data_size": 63488 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "name": "BaseBdev2", 00:15:57.575 "uuid": "a2ca3238-f9ef-40ec-812b-7994aec16dbf", 00:15:57.575 "is_configured": true, 00:15:57.575 "data_offset": 2048, 00:15:57.575 "data_size": 63488 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "name": "BaseBdev3", 00:15:57.575 "uuid": "3a6185d9-096f-48a7-ae74-1a823e5b5ce3", 00:15:57.575 "is_configured": true, 00:15:57.575 "data_offset": 2048, 00:15:57.575 "data_size": 63488 00:15:57.575 }, 00:15:57.575 { 00:15:57.575 "name": "BaseBdev4", 00:15:57.575 "uuid": "677bae7f-e0f2-4f1f-aa77-cd8bce8d6650", 00:15:57.575 "is_configured": true, 00:15:57.575 "data_offset": 2048, 00:15:57.575 "data_size": 63488 00:15:57.575 } 00:15:57.575 ] 00:15:57.575 }' 00:15:57.575 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.575 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.143 
09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.143 [2024-11-06 09:08:34.498719] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.143 "name": "Existed_Raid", 00:15:58.143 "aliases": [ 00:15:58.143 "cbcc066d-38d4-4425-9e83-0087196bd9f4" 00:15:58.143 ], 00:15:58.143 "product_name": "Raid Volume", 00:15:58.143 "block_size": 512, 00:15:58.143 "num_blocks": 253952, 00:15:58.143 "uuid": "cbcc066d-38d4-4425-9e83-0087196bd9f4", 00:15:58.143 "assigned_rate_limits": { 00:15:58.143 "rw_ios_per_sec": 0, 00:15:58.143 "rw_mbytes_per_sec": 0, 00:15:58.143 "r_mbytes_per_sec": 0, 00:15:58.143 "w_mbytes_per_sec": 0 00:15:58.143 }, 00:15:58.143 "claimed": false, 00:15:58.143 "zoned": false, 00:15:58.143 "supported_io_types": { 00:15:58.143 "read": true, 00:15:58.143 "write": true, 00:15:58.143 "unmap": true, 00:15:58.143 "flush": true, 00:15:58.143 "reset": true, 00:15:58.143 "nvme_admin": false, 00:15:58.143 "nvme_io": false, 00:15:58.143 "nvme_io_md": false, 00:15:58.143 "write_zeroes": true, 00:15:58.143 "zcopy": false, 00:15:58.143 "get_zone_info": false, 00:15:58.143 "zone_management": false, 00:15:58.143 "zone_append": false, 00:15:58.143 "compare": false, 00:15:58.143 "compare_and_write": false, 00:15:58.143 "abort": false, 00:15:58.143 "seek_hole": false, 00:15:58.143 "seek_data": false, 00:15:58.143 "copy": false, 00:15:58.143 
"nvme_iov_md": false 00:15:58.143 }, 00:15:58.143 "memory_domains": [ 00:15:58.143 { 00:15:58.143 "dma_device_id": "system", 00:15:58.143 "dma_device_type": 1 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.143 "dma_device_type": 2 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "dma_device_id": "system", 00:15:58.143 "dma_device_type": 1 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.143 "dma_device_type": 2 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "dma_device_id": "system", 00:15:58.143 "dma_device_type": 1 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.143 "dma_device_type": 2 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "dma_device_id": "system", 00:15:58.143 "dma_device_type": 1 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.143 "dma_device_type": 2 00:15:58.143 } 00:15:58.143 ], 00:15:58.143 "driver_specific": { 00:15:58.143 "raid": { 00:15:58.143 "uuid": "cbcc066d-38d4-4425-9e83-0087196bd9f4", 00:15:58.143 "strip_size_kb": 64, 00:15:58.143 "state": "online", 00:15:58.143 "raid_level": "raid0", 00:15:58.143 "superblock": true, 00:15:58.143 "num_base_bdevs": 4, 00:15:58.143 "num_base_bdevs_discovered": 4, 00:15:58.143 "num_base_bdevs_operational": 4, 00:15:58.143 "base_bdevs_list": [ 00:15:58.143 { 00:15:58.143 "name": "BaseBdev1", 00:15:58.143 "uuid": "a5015c42-be6d-4ef0-8743-45ffa8030e6a", 00:15:58.143 "is_configured": true, 00:15:58.143 "data_offset": 2048, 00:15:58.143 "data_size": 63488 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "name": "BaseBdev2", 00:15:58.143 "uuid": "a2ca3238-f9ef-40ec-812b-7994aec16dbf", 00:15:58.143 "is_configured": true, 00:15:58.143 "data_offset": 2048, 00:15:58.143 "data_size": 63488 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "name": "BaseBdev3", 00:15:58.143 "uuid": "3a6185d9-096f-48a7-ae74-1a823e5b5ce3", 00:15:58.143 "is_configured": true, 
00:15:58.143 "data_offset": 2048, 00:15:58.143 "data_size": 63488 00:15:58.143 }, 00:15:58.143 { 00:15:58.143 "name": "BaseBdev4", 00:15:58.143 "uuid": "677bae7f-e0f2-4f1f-aa77-cd8bce8d6650", 00:15:58.143 "is_configured": true, 00:15:58.143 "data_offset": 2048, 00:15:58.143 "data_size": 63488 00:15:58.143 } 00:15:58.143 ] 00:15:58.143 } 00:15:58.143 } 00:15:58.143 }' 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:58.143 BaseBdev2 00:15:58.143 BaseBdev3 00:15:58.143 BaseBdev4' 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.143 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.403 09:08:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.403 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.403 [2024-11-06 09:08:34.866501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.403 [2024-11-06 09:08:34.866573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.403 [2024-11-06 09:08:34.866639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.662 09:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:58.662 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.662 "name": "Existed_Raid", 00:15:58.662 "uuid": "cbcc066d-38d4-4425-9e83-0087196bd9f4", 00:15:58.662 "strip_size_kb": 64, 00:15:58.662 "state": "offline", 00:15:58.662 "raid_level": "raid0", 00:15:58.662 "superblock": true, 00:15:58.662 "num_base_bdevs": 4, 00:15:58.662 "num_base_bdevs_discovered": 3, 00:15:58.662 "num_base_bdevs_operational": 3, 00:15:58.662 "base_bdevs_list": [ 00:15:58.662 { 00:15:58.662 "name": null, 00:15:58.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.662 "is_configured": false, 00:15:58.662 "data_offset": 0, 00:15:58.662 "data_size": 63488 00:15:58.662 }, 00:15:58.662 { 00:15:58.662 "name": "BaseBdev2", 00:15:58.662 "uuid": "a2ca3238-f9ef-40ec-812b-7994aec16dbf", 00:15:58.662 "is_configured": true, 00:15:58.662 "data_offset": 2048, 00:15:58.662 "data_size": 63488 00:15:58.662 }, 00:15:58.662 { 00:15:58.662 "name": "BaseBdev3", 00:15:58.662 "uuid": "3a6185d9-096f-48a7-ae74-1a823e5b5ce3", 00:15:58.662 "is_configured": true, 00:15:58.662 "data_offset": 2048, 00:15:58.662 "data_size": 63488 00:15:58.662 }, 00:15:58.662 { 00:15:58.662 "name": "BaseBdev4", 00:15:58.662 "uuid": "677bae7f-e0f2-4f1f-aa77-cd8bce8d6650", 00:15:58.662 "is_configured": true, 00:15:58.662 "data_offset": 2048, 00:15:58.662 "data_size": 63488 00:15:58.662 } 00:15:58.662 ] 00:15:58.662 }' 00:15:58.662 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.662 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.229 
09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.229 [2024-11-06 09:08:35.532730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.229 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.229 [2024-11-06 09:08:35.678142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:59.489 09:08:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.489 [2024-11-06 09:08:35.824923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:59.489 [2024-11-06 09:08:35.824991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:59.489 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.490 09:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 BaseBdev2 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.749 [ 00:15:59.749 { 00:15:59.749 "name": "BaseBdev2", 00:15:59.749 "aliases": [ 00:15:59.749 
"896ec1bb-43b6-4fe5-af12-49067f0705af" 00:15:59.749 ], 00:15:59.749 "product_name": "Malloc disk", 00:15:59.749 "block_size": 512, 00:15:59.749 "num_blocks": 65536, 00:15:59.749 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:15:59.749 "assigned_rate_limits": { 00:15:59.749 "rw_ios_per_sec": 0, 00:15:59.749 "rw_mbytes_per_sec": 0, 00:15:59.749 "r_mbytes_per_sec": 0, 00:15:59.749 "w_mbytes_per_sec": 0 00:15:59.749 }, 00:15:59.749 "claimed": false, 00:15:59.749 "zoned": false, 00:15:59.749 "supported_io_types": { 00:15:59.749 "read": true, 00:15:59.749 "write": true, 00:15:59.749 "unmap": true, 00:15:59.749 "flush": true, 00:15:59.749 "reset": true, 00:15:59.749 "nvme_admin": false, 00:15:59.749 "nvme_io": false, 00:15:59.749 "nvme_io_md": false, 00:15:59.749 "write_zeroes": true, 00:15:59.749 "zcopy": true, 00:15:59.749 "get_zone_info": false, 00:15:59.749 "zone_management": false, 00:15:59.749 "zone_append": false, 00:15:59.749 "compare": false, 00:15:59.749 "compare_and_write": false, 00:15:59.749 "abort": true, 00:15:59.749 "seek_hole": false, 00:15:59.749 "seek_data": false, 00:15:59.749 "copy": true, 00:15:59.749 "nvme_iov_md": false 00:15:59.749 }, 00:15:59.749 "memory_domains": [ 00:15:59.749 { 00:15:59.749 "dma_device_id": "system", 00:15:59.749 "dma_device_type": 1 00:15:59.749 }, 00:15:59.749 { 00:15:59.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.749 "dma_device_type": 2 00:15:59.749 } 00:15:59.749 ], 00:15:59.749 "driver_specific": {} 00:15:59.749 } 00:15:59.749 ] 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.749 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.749 09:08:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.750 BaseBdev3 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.750 [ 00:15:59.750 { 
00:15:59.750 "name": "BaseBdev3", 00:15:59.750 "aliases": [ 00:15:59.750 "315b7c8e-f836-4776-89cc-ecff2dc146b0" 00:15:59.750 ], 00:15:59.750 "product_name": "Malloc disk", 00:15:59.750 "block_size": 512, 00:15:59.750 "num_blocks": 65536, 00:15:59.750 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:15:59.750 "assigned_rate_limits": { 00:15:59.750 "rw_ios_per_sec": 0, 00:15:59.750 "rw_mbytes_per_sec": 0, 00:15:59.750 "r_mbytes_per_sec": 0, 00:15:59.750 "w_mbytes_per_sec": 0 00:15:59.750 }, 00:15:59.750 "claimed": false, 00:15:59.750 "zoned": false, 00:15:59.750 "supported_io_types": { 00:15:59.750 "read": true, 00:15:59.750 "write": true, 00:15:59.750 "unmap": true, 00:15:59.750 "flush": true, 00:15:59.750 "reset": true, 00:15:59.750 "nvme_admin": false, 00:15:59.750 "nvme_io": false, 00:15:59.750 "nvme_io_md": false, 00:15:59.750 "write_zeroes": true, 00:15:59.750 "zcopy": true, 00:15:59.750 "get_zone_info": false, 00:15:59.750 "zone_management": false, 00:15:59.750 "zone_append": false, 00:15:59.750 "compare": false, 00:15:59.750 "compare_and_write": false, 00:15:59.750 "abort": true, 00:15:59.750 "seek_hole": false, 00:15:59.750 "seek_data": false, 00:15:59.750 "copy": true, 00:15:59.750 "nvme_iov_md": false 00:15:59.750 }, 00:15:59.750 "memory_domains": [ 00:15:59.750 { 00:15:59.750 "dma_device_id": "system", 00:15:59.750 "dma_device_type": 1 00:15:59.750 }, 00:15:59.750 { 00:15:59.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.750 "dma_device_type": 2 00:15:59.750 } 00:15:59.750 ], 00:15:59.750 "driver_specific": {} 00:15:59.750 } 00:15:59.750 ] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.750 BaseBdev4 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:59.750 [ 00:15:59.750 { 00:15:59.750 "name": "BaseBdev4", 00:15:59.750 "aliases": [ 00:15:59.750 "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3" 00:15:59.750 ], 00:15:59.750 "product_name": "Malloc disk", 00:15:59.750 "block_size": 512, 00:15:59.750 "num_blocks": 65536, 00:15:59.750 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:15:59.750 "assigned_rate_limits": { 00:15:59.750 "rw_ios_per_sec": 0, 00:15:59.750 "rw_mbytes_per_sec": 0, 00:15:59.750 "r_mbytes_per_sec": 0, 00:15:59.750 "w_mbytes_per_sec": 0 00:15:59.750 }, 00:15:59.750 "claimed": false, 00:15:59.750 "zoned": false, 00:15:59.750 "supported_io_types": { 00:15:59.750 "read": true, 00:15:59.750 "write": true, 00:15:59.750 "unmap": true, 00:15:59.750 "flush": true, 00:15:59.750 "reset": true, 00:15:59.750 "nvme_admin": false, 00:15:59.750 "nvme_io": false, 00:15:59.750 "nvme_io_md": false, 00:15:59.750 "write_zeroes": true, 00:15:59.750 "zcopy": true, 00:15:59.750 "get_zone_info": false, 00:15:59.750 "zone_management": false, 00:15:59.750 "zone_append": false, 00:15:59.750 "compare": false, 00:15:59.750 "compare_and_write": false, 00:15:59.750 "abort": true, 00:15:59.750 "seek_hole": false, 00:15:59.750 "seek_data": false, 00:15:59.750 "copy": true, 00:15:59.750 "nvme_iov_md": false 00:15:59.750 }, 00:15:59.750 "memory_domains": [ 00:15:59.750 { 00:15:59.750 "dma_device_id": "system", 00:15:59.750 "dma_device_type": 1 00:15:59.750 }, 00:15:59.750 { 00:15:59.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.750 "dma_device_type": 2 00:15:59.750 } 00:15:59.750 ], 00:15:59.750 "driver_specific": {} 00:15:59.750 } 00:15:59.750 ] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:59.750 09:08:36 
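The `(( i++ ))` / `(( i < num_base_bdevs ))` pair that recurs between the bdev dumps is the loop in `bdev_raid.sh` that stamps out the four base bdevs. A minimal sketch of that loop, again with `rpc_cmd` stubbed (the real loop also calls `waitforbdev` on each device, as seen above):

```shell
#!/usr/bin/env bash
# Sketch of the base-bdev creation loop driven by (( i < num_base_bdevs )).
# rpc_cmd is stubbed to echo its arguments instead of invoking SPDK RPC.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
for ((i = 1; i <= num_base_bdevs; i++)); do
    # 32 MiB malloc disk, 512-byte block size, named BaseBdev1..BaseBdev4
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$i"
done
```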
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.750 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.750 [2024-11-06 09:08:36.224116] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.750 [2024-11-06 09:08:36.224186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.751 [2024-11-06 09:08:36.224217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.751 [2024-11-06 09:08:36.226652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.751 [2024-11-06 09:08:36.226723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.751 "name": "Existed_Raid", 00:15:59.751 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:15:59.751 "strip_size_kb": 64, 00:15:59.751 "state": "configuring", 00:15:59.751 "raid_level": "raid0", 00:15:59.751 "superblock": true, 00:15:59.751 "num_base_bdevs": 4, 00:15:59.751 "num_base_bdevs_discovered": 3, 00:15:59.751 "num_base_bdevs_operational": 4, 00:15:59.751 "base_bdevs_list": [ 00:15:59.751 { 00:15:59.751 "name": "BaseBdev1", 00:15:59.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.751 "is_configured": false, 00:15:59.751 "data_offset": 0, 00:15:59.751 "data_size": 0 00:15:59.751 }, 00:15:59.751 { 00:15:59.751 "name": "BaseBdev2", 00:15:59.751 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:15:59.751 "is_configured": true, 00:15:59.751 "data_offset": 2048, 00:15:59.751 "data_size": 63488 
00:15:59.751 }, 00:15:59.751 { 00:15:59.751 "name": "BaseBdev3", 00:15:59.751 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:15:59.751 "is_configured": true, 00:15:59.751 "data_offset": 2048, 00:15:59.751 "data_size": 63488 00:15:59.751 }, 00:15:59.751 { 00:15:59.751 "name": "BaseBdev4", 00:15:59.751 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:15:59.751 "is_configured": true, 00:15:59.751 "data_offset": 2048, 00:15:59.751 "data_size": 63488 00:15:59.751 } 00:15:59.751 ] 00:15:59.751 }' 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.751 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.318 [2024-11-06 09:08:36.772322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.318 "name": "Existed_Raid", 00:16:00.318 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:00.318 "strip_size_kb": 64, 00:16:00.318 "state": "configuring", 00:16:00.318 "raid_level": "raid0", 00:16:00.318 "superblock": true, 00:16:00.318 "num_base_bdevs": 4, 00:16:00.318 "num_base_bdevs_discovered": 2, 00:16:00.318 "num_base_bdevs_operational": 4, 00:16:00.318 "base_bdevs_list": [ 00:16:00.318 { 00:16:00.318 "name": "BaseBdev1", 00:16:00.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.318 "is_configured": false, 00:16:00.318 "data_offset": 0, 00:16:00.318 "data_size": 0 00:16:00.318 }, 00:16:00.318 { 00:16:00.318 "name": null, 00:16:00.318 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:00.318 "is_configured": false, 00:16:00.318 "data_offset": 0, 00:16:00.318 "data_size": 63488 
00:16:00.318 }, 00:16:00.318 { 00:16:00.318 "name": "BaseBdev3", 00:16:00.318 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:00.318 "is_configured": true, 00:16:00.318 "data_offset": 2048, 00:16:00.318 "data_size": 63488 00:16:00.318 }, 00:16:00.318 { 00:16:00.318 "name": "BaseBdev4", 00:16:00.318 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:00.318 "is_configured": true, 00:16:00.318 "data_offset": 2048, 00:16:00.318 "data_size": 63488 00:16:00.318 } 00:16:00.318 ] 00:16:00.318 }' 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.318 09:08:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 [2024-11-06 09:08:37.371420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.887 BaseBdev1 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 [ 00:16:00.887 { 00:16:00.887 "name": "BaseBdev1", 00:16:00.887 "aliases": [ 00:16:00.887 "ad6f61ad-0804-4b89-9e87-05426f13219d" 00:16:00.887 ], 00:16:00.887 "product_name": "Malloc disk", 00:16:00.887 "block_size": 512, 00:16:00.887 "num_blocks": 65536, 00:16:00.887 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:00.887 "assigned_rate_limits": { 00:16:00.887 "rw_ios_per_sec": 0, 00:16:00.887 "rw_mbytes_per_sec": 0, 
00:16:00.887 "r_mbytes_per_sec": 0, 00:16:00.887 "w_mbytes_per_sec": 0 00:16:00.887 }, 00:16:00.887 "claimed": true, 00:16:00.887 "claim_type": "exclusive_write", 00:16:00.887 "zoned": false, 00:16:00.887 "supported_io_types": { 00:16:00.887 "read": true, 00:16:00.887 "write": true, 00:16:00.887 "unmap": true, 00:16:00.887 "flush": true, 00:16:00.887 "reset": true, 00:16:00.887 "nvme_admin": false, 00:16:00.887 "nvme_io": false, 00:16:00.887 "nvme_io_md": false, 00:16:00.887 "write_zeroes": true, 00:16:00.887 "zcopy": true, 00:16:00.887 "get_zone_info": false, 00:16:00.887 "zone_management": false, 00:16:00.887 "zone_append": false, 00:16:00.887 "compare": false, 00:16:00.887 "compare_and_write": false, 00:16:00.887 "abort": true, 00:16:00.887 "seek_hole": false, 00:16:00.887 "seek_data": false, 00:16:00.887 "copy": true, 00:16:00.887 "nvme_iov_md": false 00:16:00.887 }, 00:16:00.887 "memory_domains": [ 00:16:00.887 { 00:16:00.887 "dma_device_id": "system", 00:16:00.887 "dma_device_type": 1 00:16:00.887 }, 00:16:00.887 { 00:16:00.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.887 "dma_device_type": 2 00:16:00.887 } 00:16:00.887 ], 00:16:00.887 "driver_specific": {} 00:16:00.887 } 00:16:00.887 ] 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:00.887 09:08:37 
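The `verify_raid_bdev_state` helper invoked throughout this trace fetches `bdev_raid_get_bdevs all`, narrows it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields such as `state` and `num_base_bdevs_discovered` against expectations. The sketch below captures that check against an inlined, abridged copy of the JSON from the log; it uses `grep` instead of `jq` purely to stay dependency-free, so the field matching is much cruder than the real helper's.

```shell
#!/usr/bin/env bash
# Sketch of the verify_raid_bdev_state field checks. Abridged JSON taken from
# the log above; the real helper filters live bdev_raid_get_bdevs output with jq.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3
}'

check_field() {
    # Crude substring match on a "key": value pair; illustration only.
    local field=$1 expected=$2
    echo "$raid_bdev_info" | grep -q "\"$field\": $expected"
}

check_field state '"configuring"' && check_field raid_level '"raid0"' &&
    check_field strip_size_kb 64 && echo "state verified"
```

The raid stays in `configuring` (rather than `online`) exactly while `num_base_bdevs_discovered < num_base_bdevs`, which is what the test is asserting at each step.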
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.887 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.146 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.146 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.146 "name": "Existed_Raid", 00:16:01.146 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:01.146 "strip_size_kb": 64, 00:16:01.146 "state": "configuring", 00:16:01.146 "raid_level": "raid0", 00:16:01.146 "superblock": true, 00:16:01.146 "num_base_bdevs": 4, 00:16:01.146 "num_base_bdevs_discovered": 3, 00:16:01.146 "num_base_bdevs_operational": 4, 00:16:01.146 "base_bdevs_list": [ 00:16:01.146 { 00:16:01.146 "name": "BaseBdev1", 00:16:01.146 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:01.146 "is_configured": true, 00:16:01.146 "data_offset": 2048, 00:16:01.146 "data_size": 63488 00:16:01.146 }, 00:16:01.146 { 
00:16:01.146 "name": null, 00:16:01.146 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:01.146 "is_configured": false, 00:16:01.146 "data_offset": 0, 00:16:01.146 "data_size": 63488 00:16:01.146 }, 00:16:01.146 { 00:16:01.146 "name": "BaseBdev3", 00:16:01.146 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:01.146 "is_configured": true, 00:16:01.146 "data_offset": 2048, 00:16:01.146 "data_size": 63488 00:16:01.146 }, 00:16:01.146 { 00:16:01.146 "name": "BaseBdev4", 00:16:01.146 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:01.146 "is_configured": true, 00:16:01.146 "data_offset": 2048, 00:16:01.146 "data_size": 63488 00:16:01.146 } 00:16:01.146 ] 00:16:01.147 }' 00:16:01.147 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.147 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.715 09:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.715 [2024-11-06 09:08:38.007675] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.715 09:08:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.715 "name": "Existed_Raid", 00:16:01.715 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:01.715 "strip_size_kb": 64, 00:16:01.715 "state": "configuring", 00:16:01.715 "raid_level": "raid0", 00:16:01.715 "superblock": true, 00:16:01.715 "num_base_bdevs": 4, 00:16:01.715 "num_base_bdevs_discovered": 2, 00:16:01.715 "num_base_bdevs_operational": 4, 00:16:01.715 "base_bdevs_list": [ 00:16:01.715 { 00:16:01.715 "name": "BaseBdev1", 00:16:01.715 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:01.715 "is_configured": true, 00:16:01.715 "data_offset": 2048, 00:16:01.715 "data_size": 63488 00:16:01.715 }, 00:16:01.715 { 00:16:01.715 "name": null, 00:16:01.715 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:01.715 "is_configured": false, 00:16:01.715 "data_offset": 0, 00:16:01.715 "data_size": 63488 00:16:01.715 }, 00:16:01.715 { 00:16:01.715 "name": null, 00:16:01.715 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:01.715 "is_configured": false, 00:16:01.715 "data_offset": 0, 00:16:01.715 "data_size": 63488 00:16:01.715 }, 00:16:01.715 { 00:16:01.715 "name": "BaseBdev4", 00:16:01.715 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:01.715 "is_configured": true, 00:16:01.715 "data_offset": 2048, 00:16:01.715 "data_size": 63488 00:16:01.715 } 00:16:01.715 ] 00:16:01.715 }' 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.715 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.283 
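The `bdev_raid_remove_base_bdev` / `bdev_raid_add_base_bdev` calls around `sh@302`-`sh@306` exercise hot-unplug and re-add of base devices: each removal decrements `num_base_bdevs_discovered` in the state dump, and re-adding the bdev restores it. The toy sketch below models just that counter with a stubbed `rpc_cmd` (the counter is tracked locally for illustration; in SPDK it lives in the raid bdev's own state):

```shell
#!/usr/bin/env bash
# Sketch of the hot remove / re-add sequence and its effect on
# num_base_bdevs_discovered. rpc_cmd is stubbed; "discovered" is a stand-in
# for the field reported by bdev_raid_get_bdevs.
discovered=4
rpc_cmd() {
    case "$1" in
        bdev_raid_remove_base_bdev) discovered=$((discovered - 1)) ;;
        bdev_raid_add_base_bdev)    discovered=$((discovered + 1)) ;;
    esac
}

rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
echo "discovered after removals: $discovered"   # 2 of 4; raid stays "configuring"
rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
echo "discovered after re-add: $discovered"     # back to 3
```

Because the array was created with superblocks (`bdev_raid_create -s`), a removed-then-re-added base bdev can be matched back to its slot, which is why the test can toggle members while the raid remains in the `configuring` state.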
09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.283 [2024-11-06 09:08:38.607939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.283 "name": "Existed_Raid", 00:16:02.283 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:02.283 "strip_size_kb": 64, 00:16:02.283 "state": "configuring", 00:16:02.283 "raid_level": "raid0", 00:16:02.283 "superblock": true, 00:16:02.283 "num_base_bdevs": 4, 00:16:02.283 "num_base_bdevs_discovered": 3, 00:16:02.283 "num_base_bdevs_operational": 4, 00:16:02.283 "base_bdevs_list": [ 00:16:02.283 { 00:16:02.283 "name": "BaseBdev1", 00:16:02.283 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:02.283 "is_configured": true, 00:16:02.283 "data_offset": 2048, 00:16:02.283 "data_size": 63488 00:16:02.283 }, 00:16:02.283 { 00:16:02.283 "name": null, 00:16:02.283 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:02.283 "is_configured": false, 00:16:02.283 "data_offset": 0, 00:16:02.283 "data_size": 63488 00:16:02.283 }, 00:16:02.283 { 00:16:02.283 "name": "BaseBdev3", 00:16:02.283 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:02.283 "is_configured": true, 00:16:02.283 "data_offset": 2048, 00:16:02.283 "data_size": 63488 00:16:02.283 }, 00:16:02.283 { 00:16:02.283 "name": "BaseBdev4", 00:16:02.283 "uuid": 
"f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:02.283 "is_configured": true, 00:16:02.283 "data_offset": 2048, 00:16:02.283 "data_size": 63488 00:16:02.283 } 00:16:02.283 ] 00:16:02.283 }' 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.283 09:08:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.850 [2024-11-06 09:08:39.228205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.850 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.851 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.851 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.851 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.851 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.851 "name": "Existed_Raid", 00:16:02.851 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:02.851 "strip_size_kb": 64, 00:16:02.851 "state": "configuring", 00:16:02.851 "raid_level": "raid0", 00:16:02.851 "superblock": true, 00:16:02.851 "num_base_bdevs": 4, 00:16:02.851 "num_base_bdevs_discovered": 2, 00:16:02.851 "num_base_bdevs_operational": 4, 00:16:02.851 "base_bdevs_list": [ 00:16:02.851 { 00:16:02.851 "name": null, 00:16:02.851 
"uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:02.851 "is_configured": false, 00:16:02.851 "data_offset": 0, 00:16:02.851 "data_size": 63488 00:16:02.851 }, 00:16:02.851 { 00:16:02.851 "name": null, 00:16:02.851 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:02.851 "is_configured": false, 00:16:02.851 "data_offset": 0, 00:16:02.851 "data_size": 63488 00:16:02.851 }, 00:16:02.851 { 00:16:02.851 "name": "BaseBdev3", 00:16:02.851 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:02.851 "is_configured": true, 00:16:02.851 "data_offset": 2048, 00:16:02.851 "data_size": 63488 00:16:02.851 }, 00:16:02.851 { 00:16:02.851 "name": "BaseBdev4", 00:16:02.851 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:02.851 "is_configured": true, 00:16:02.851 "data_offset": 2048, 00:16:02.851 "data_size": 63488 00:16:02.851 } 00:16:02.851 ] 00:16:02.851 }' 00:16:02.851 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.851 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.418 [2024-11-06 09:08:39.902192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.418 09:08:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.418 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.676 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.676 "name": "Existed_Raid", 00:16:03.676 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:03.676 "strip_size_kb": 64, 00:16:03.676 "state": "configuring", 00:16:03.676 "raid_level": "raid0", 00:16:03.676 "superblock": true, 00:16:03.676 "num_base_bdevs": 4, 00:16:03.676 "num_base_bdevs_discovered": 3, 00:16:03.676 "num_base_bdevs_operational": 4, 00:16:03.676 "base_bdevs_list": [ 00:16:03.676 { 00:16:03.676 "name": null, 00:16:03.676 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:03.676 "is_configured": false, 00:16:03.676 "data_offset": 0, 00:16:03.676 "data_size": 63488 00:16:03.676 }, 00:16:03.676 { 00:16:03.676 "name": "BaseBdev2", 00:16:03.676 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:03.676 "is_configured": true, 00:16:03.676 "data_offset": 2048, 00:16:03.676 "data_size": 63488 00:16:03.676 }, 00:16:03.676 { 00:16:03.676 "name": "BaseBdev3", 00:16:03.676 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:03.676 "is_configured": true, 00:16:03.676 "data_offset": 2048, 00:16:03.676 "data_size": 63488 00:16:03.676 }, 00:16:03.676 { 00:16:03.676 "name": "BaseBdev4", 00:16:03.676 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:03.676 "is_configured": true, 00:16:03.676 "data_offset": 2048, 00:16:03.676 "data_size": 63488 00:16:03.676 } 00:16:03.676 ] 00:16:03.676 }' 00:16:03.676 09:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.676 09:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.935 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:03.935 09:08:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.935 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.935 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.935 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.193 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:04.193 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ad6f61ad-0804-4b89-9e87-05426f13219d 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.194 [2024-11-06 09:08:40.589417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:04.194 [2024-11-06 09:08:40.589678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:04.194 [2024-11-06 09:08:40.589694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:04.194 [2024-11-06 09:08:40.590065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:04.194 NewBaseBdev 00:16:04.194 [2024-11-06 09:08:40.590242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:04.194 [2024-11-06 09:08:40.590263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:04.194 [2024-11-06 09:08:40.590424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.194 09:08:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.194 [ 00:16:04.194 { 00:16:04.194 "name": "NewBaseBdev", 00:16:04.194 "aliases": [ 00:16:04.194 "ad6f61ad-0804-4b89-9e87-05426f13219d" 00:16:04.194 ], 00:16:04.194 "product_name": "Malloc disk", 00:16:04.194 "block_size": 512, 00:16:04.194 "num_blocks": 65536, 00:16:04.194 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:04.194 "assigned_rate_limits": { 00:16:04.194 "rw_ios_per_sec": 0, 00:16:04.194 "rw_mbytes_per_sec": 0, 00:16:04.194 "r_mbytes_per_sec": 0, 00:16:04.194 "w_mbytes_per_sec": 0 00:16:04.194 }, 00:16:04.194 "claimed": true, 00:16:04.194 "claim_type": "exclusive_write", 00:16:04.194 "zoned": false, 00:16:04.194 "supported_io_types": { 00:16:04.194 "read": true, 00:16:04.194 "write": true, 00:16:04.194 "unmap": true, 00:16:04.194 "flush": true, 00:16:04.194 "reset": true, 00:16:04.194 "nvme_admin": false, 00:16:04.194 "nvme_io": false, 00:16:04.194 "nvme_io_md": false, 00:16:04.194 "write_zeroes": true, 00:16:04.194 "zcopy": true, 00:16:04.194 "get_zone_info": false, 00:16:04.194 "zone_management": false, 00:16:04.194 "zone_append": false, 00:16:04.194 "compare": false, 00:16:04.194 "compare_and_write": false, 00:16:04.194 "abort": true, 00:16:04.194 "seek_hole": false, 00:16:04.194 "seek_data": false, 00:16:04.194 "copy": true, 00:16:04.194 "nvme_iov_md": false 00:16:04.194 }, 00:16:04.194 "memory_domains": [ 00:16:04.194 { 00:16:04.194 "dma_device_id": "system", 00:16:04.194 "dma_device_type": 1 00:16:04.194 }, 00:16:04.194 { 00:16:04.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.194 "dma_device_type": 2 00:16:04.194 } 00:16:04.194 ], 00:16:04.194 "driver_specific": {} 00:16:04.194 } 00:16:04.194 ] 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:04.194 09:08:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.194 "name": "Existed_Raid", 00:16:04.194 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:04.194 "strip_size_kb": 64, 00:16:04.194 
"state": "online", 00:16:04.194 "raid_level": "raid0", 00:16:04.194 "superblock": true, 00:16:04.194 "num_base_bdevs": 4, 00:16:04.194 "num_base_bdevs_discovered": 4, 00:16:04.194 "num_base_bdevs_operational": 4, 00:16:04.194 "base_bdevs_list": [ 00:16:04.194 { 00:16:04.194 "name": "NewBaseBdev", 00:16:04.194 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:04.194 "is_configured": true, 00:16:04.194 "data_offset": 2048, 00:16:04.194 "data_size": 63488 00:16:04.194 }, 00:16:04.194 { 00:16:04.194 "name": "BaseBdev2", 00:16:04.194 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:04.194 "is_configured": true, 00:16:04.194 "data_offset": 2048, 00:16:04.194 "data_size": 63488 00:16:04.194 }, 00:16:04.194 { 00:16:04.194 "name": "BaseBdev3", 00:16:04.194 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:04.194 "is_configured": true, 00:16:04.194 "data_offset": 2048, 00:16:04.194 "data_size": 63488 00:16:04.194 }, 00:16:04.194 { 00:16:04.194 "name": "BaseBdev4", 00:16:04.194 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:04.194 "is_configured": true, 00:16:04.194 "data_offset": 2048, 00:16:04.194 "data_size": 63488 00:16:04.194 } 00:16:04.194 ] 00:16:04.194 }' 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.194 09:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.762 
09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.762 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.763 [2024-11-06 09:08:41.186254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.763 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.763 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.763 "name": "Existed_Raid", 00:16:04.763 "aliases": [ 00:16:04.763 "7fe85ff0-4f65-4f56-be01-4294b31b068b" 00:16:04.763 ], 00:16:04.763 "product_name": "Raid Volume", 00:16:04.763 "block_size": 512, 00:16:04.763 "num_blocks": 253952, 00:16:04.763 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:04.763 "assigned_rate_limits": { 00:16:04.763 "rw_ios_per_sec": 0, 00:16:04.763 "rw_mbytes_per_sec": 0, 00:16:04.763 "r_mbytes_per_sec": 0, 00:16:04.763 "w_mbytes_per_sec": 0 00:16:04.763 }, 00:16:04.763 "claimed": false, 00:16:04.763 "zoned": false, 00:16:04.763 "supported_io_types": { 00:16:04.763 "read": true, 00:16:04.763 "write": true, 00:16:04.763 "unmap": true, 00:16:04.763 "flush": true, 00:16:04.763 "reset": true, 00:16:04.763 "nvme_admin": false, 00:16:04.763 "nvme_io": false, 00:16:04.763 "nvme_io_md": false, 00:16:04.763 "write_zeroes": true, 00:16:04.763 "zcopy": false, 00:16:04.763 "get_zone_info": false, 00:16:04.763 "zone_management": false, 00:16:04.763 "zone_append": false, 00:16:04.763 "compare": false, 00:16:04.763 "compare_and_write": false, 00:16:04.763 "abort": 
false, 00:16:04.763 "seek_hole": false, 00:16:04.763 "seek_data": false, 00:16:04.763 "copy": false, 00:16:04.763 "nvme_iov_md": false 00:16:04.763 }, 00:16:04.763 "memory_domains": [ 00:16:04.763 { 00:16:04.763 "dma_device_id": "system", 00:16:04.763 "dma_device_type": 1 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.763 "dma_device_type": 2 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "dma_device_id": "system", 00:16:04.763 "dma_device_type": 1 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.763 "dma_device_type": 2 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "dma_device_id": "system", 00:16:04.763 "dma_device_type": 1 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.763 "dma_device_type": 2 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "dma_device_id": "system", 00:16:04.763 "dma_device_type": 1 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.763 "dma_device_type": 2 00:16:04.763 } 00:16:04.763 ], 00:16:04.763 "driver_specific": { 00:16:04.763 "raid": { 00:16:04.763 "uuid": "7fe85ff0-4f65-4f56-be01-4294b31b068b", 00:16:04.763 "strip_size_kb": 64, 00:16:04.763 "state": "online", 00:16:04.763 "raid_level": "raid0", 00:16:04.763 "superblock": true, 00:16:04.763 "num_base_bdevs": 4, 00:16:04.763 "num_base_bdevs_discovered": 4, 00:16:04.763 "num_base_bdevs_operational": 4, 00:16:04.763 "base_bdevs_list": [ 00:16:04.763 { 00:16:04.763 "name": "NewBaseBdev", 00:16:04.763 "uuid": "ad6f61ad-0804-4b89-9e87-05426f13219d", 00:16:04.763 "is_configured": true, 00:16:04.763 "data_offset": 2048, 00:16:04.763 "data_size": 63488 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "name": "BaseBdev2", 00:16:04.763 "uuid": "896ec1bb-43b6-4fe5-af12-49067f0705af", 00:16:04.763 "is_configured": true, 00:16:04.763 "data_offset": 2048, 00:16:04.763 "data_size": 63488 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 
"name": "BaseBdev3", 00:16:04.763 "uuid": "315b7c8e-f836-4776-89cc-ecff2dc146b0", 00:16:04.763 "is_configured": true, 00:16:04.763 "data_offset": 2048, 00:16:04.763 "data_size": 63488 00:16:04.763 }, 00:16:04.763 { 00:16:04.763 "name": "BaseBdev4", 00:16:04.763 "uuid": "f2aa9729-a5d4-4395-bd06-4b8b43f9d9f3", 00:16:04.763 "is_configured": true, 00:16:04.763 "data_offset": 2048, 00:16:04.763 "data_size": 63488 00:16:04.763 } 00:16:04.763 ] 00:16:04.763 } 00:16:04.763 } 00:16:04.763 }' 00:16:04.763 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.763 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:04.763 BaseBdev2 00:16:04.763 BaseBdev3 00:16:04.763 BaseBdev4' 00:16:04.763 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.022 09:08:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.022 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.282 [2024-11-06 09:08:41.557841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:05.282 [2024-11-06 09:08:41.557886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.282 [2024-11-06 09:08:41.558046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.282 [2024-11-06 09:08:41.558164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.282 [2024-11-06 09:08:41.558185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70080 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 70080 ']' 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 70080 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70080 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70080' 00:16:05.282 killing process with pid 70080 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 70080 00:16:05.282 [2024-11-06 09:08:41.597612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.282 09:08:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 70080 00:16:05.541 [2024-11-06 09:08:41.949410] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.477 09:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:06.477 00:16:06.477 real 0m13.051s 00:16:06.477 user 0m21.670s 00:16:06.477 sys 0m1.810s 00:16:06.477 09:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:06.477 09:08:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.477 ************************************ 00:16:06.477 END TEST raid_state_function_test_sb 00:16:06.477 ************************************ 00:16:06.736 09:08:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:16:06.736 09:08:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:06.736 09:08:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:06.736 09:08:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.736 ************************************ 00:16:06.736 START TEST raid_superblock_test 00:16:06.736 ************************************ 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70760 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70760 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 70760 ']' 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:06.736 09:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.736 [2024-11-06 09:08:43.172193] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:16:06.736 [2024-11-06 09:08:43.173336] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70760 ] 00:16:06.995 [2024-11-06 09:08:43.363999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.995 [2024-11-06 09:08:43.507107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.255 [2024-11-06 09:08:43.711215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.255 [2024-11-06 09:08:43.711277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:07.837 
09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.837 malloc1 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.837 [2024-11-06 09:08:44.277103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.837 [2024-11-06 09:08:44.277188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.837 [2024-11-06 09:08:44.277240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:07.837 [2024-11-06 09:08:44.277262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.837 [2024-11-06 09:08:44.280158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.837 [2024-11-06 09:08:44.280201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.837 pt1 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.837 malloc2 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.837 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.838 [2024-11-06 09:08:44.325160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:07.838 [2024-11-06 09:08:44.325225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.838 [2024-11-06 09:08:44.325257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:07.838 [2024-11-06 09:08:44.325273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.838 [2024-11-06 09:08:44.328109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.838 [2024-11-06 09:08:44.328153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:07.838 
pt2 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.838 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.096 malloc3 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.096 [2024-11-06 09:08:44.394255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:08.096 [2024-11-06 09:08:44.394329] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.096 [2024-11-06 09:08:44.394361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:08.096 [2024-11-06 09:08:44.394376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.096 [2024-11-06 09:08:44.397363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.096 [2024-11-06 09:08:44.397405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:08.096 pt3 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.096 malloc4 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.096 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.096 [2024-11-06 09:08:44.452020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:08.096 [2024-11-06 09:08:44.452083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.096 [2024-11-06 09:08:44.452112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:08.096 [2024-11-06 09:08:44.452127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.096 [2024-11-06 09:08:44.454983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.097 [2024-11-06 09:08:44.455028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:08.097 pt4 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.097 [2024-11-06 09:08:44.464058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:08.097 [2024-11-06 
09:08:44.466428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.097 [2024-11-06 09:08:44.466567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:08.097 [2024-11-06 09:08:44.466666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:08.097 [2024-11-06 09:08:44.466966] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:08.097 [2024-11-06 09:08:44.467003] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:08.097 [2024-11-06 09:08:44.467349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:08.097 [2024-11-06 09:08:44.467581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:08.097 [2024-11-06 09:08:44.467603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:08.097 [2024-11-06 09:08:44.467809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.097 "name": "raid_bdev1", 00:16:08.097 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:08.097 "strip_size_kb": 64, 00:16:08.097 "state": "online", 00:16:08.097 "raid_level": "raid0", 00:16:08.097 "superblock": true, 00:16:08.097 "num_base_bdevs": 4, 00:16:08.097 "num_base_bdevs_discovered": 4, 00:16:08.097 "num_base_bdevs_operational": 4, 00:16:08.097 "base_bdevs_list": [ 00:16:08.097 { 00:16:08.097 "name": "pt1", 00:16:08.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.097 "is_configured": true, 00:16:08.097 "data_offset": 2048, 00:16:08.097 "data_size": 63488 00:16:08.097 }, 00:16:08.097 { 00:16:08.097 "name": "pt2", 00:16:08.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.097 "is_configured": true, 00:16:08.097 "data_offset": 2048, 00:16:08.097 "data_size": 63488 00:16:08.097 }, 00:16:08.097 { 00:16:08.097 "name": "pt3", 00:16:08.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.097 "is_configured": true, 00:16:08.097 "data_offset": 2048, 00:16:08.097 
"data_size": 63488 00:16:08.097 }, 00:16:08.097 { 00:16:08.097 "name": "pt4", 00:16:08.097 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.097 "is_configured": true, 00:16:08.097 "data_offset": 2048, 00:16:08.097 "data_size": 63488 00:16:08.097 } 00:16:08.097 ] 00:16:08.097 }' 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.097 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.663 09:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:08.663 [2024-11-06 09:08:44.992585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.663 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.663 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:08.663 "name": "raid_bdev1", 00:16:08.663 "aliases": [ 00:16:08.663 "f7173361-0fd4-4381-878a-f3af421b98be" 
00:16:08.663 ], 00:16:08.663 "product_name": "Raid Volume", 00:16:08.663 "block_size": 512, 00:16:08.663 "num_blocks": 253952, 00:16:08.663 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:08.663 "assigned_rate_limits": { 00:16:08.663 "rw_ios_per_sec": 0, 00:16:08.663 "rw_mbytes_per_sec": 0, 00:16:08.663 "r_mbytes_per_sec": 0, 00:16:08.663 "w_mbytes_per_sec": 0 00:16:08.663 }, 00:16:08.663 "claimed": false, 00:16:08.663 "zoned": false, 00:16:08.663 "supported_io_types": { 00:16:08.663 "read": true, 00:16:08.663 "write": true, 00:16:08.663 "unmap": true, 00:16:08.663 "flush": true, 00:16:08.663 "reset": true, 00:16:08.663 "nvme_admin": false, 00:16:08.663 "nvme_io": false, 00:16:08.663 "nvme_io_md": false, 00:16:08.663 "write_zeroes": true, 00:16:08.663 "zcopy": false, 00:16:08.663 "get_zone_info": false, 00:16:08.663 "zone_management": false, 00:16:08.663 "zone_append": false, 00:16:08.663 "compare": false, 00:16:08.663 "compare_and_write": false, 00:16:08.663 "abort": false, 00:16:08.663 "seek_hole": false, 00:16:08.663 "seek_data": false, 00:16:08.663 "copy": false, 00:16:08.663 "nvme_iov_md": false 00:16:08.663 }, 00:16:08.663 "memory_domains": [ 00:16:08.663 { 00:16:08.663 "dma_device_id": "system", 00:16:08.663 "dma_device_type": 1 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.663 "dma_device_type": 2 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "dma_device_id": "system", 00:16:08.663 "dma_device_type": 1 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.663 "dma_device_type": 2 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "dma_device_id": "system", 00:16:08.663 "dma_device_type": 1 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.663 "dma_device_type": 2 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "dma_device_id": "system", 00:16:08.663 "dma_device_type": 1 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:08.663 "dma_device_type": 2 00:16:08.663 } 00:16:08.663 ], 00:16:08.663 "driver_specific": { 00:16:08.663 "raid": { 00:16:08.663 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:08.663 "strip_size_kb": 64, 00:16:08.663 "state": "online", 00:16:08.663 "raid_level": "raid0", 00:16:08.663 "superblock": true, 00:16:08.663 "num_base_bdevs": 4, 00:16:08.663 "num_base_bdevs_discovered": 4, 00:16:08.663 "num_base_bdevs_operational": 4, 00:16:08.663 "base_bdevs_list": [ 00:16:08.663 { 00:16:08.663 "name": "pt1", 00:16:08.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:08.663 "is_configured": true, 00:16:08.663 "data_offset": 2048, 00:16:08.663 "data_size": 63488 00:16:08.663 }, 00:16:08.663 { 00:16:08.663 "name": "pt2", 00:16:08.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.663 "is_configured": true, 00:16:08.663 "data_offset": 2048, 00:16:08.663 "data_size": 63488 00:16:08.663 }, 00:16:08.664 { 00:16:08.664 "name": "pt3", 00:16:08.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:08.664 "is_configured": true, 00:16:08.664 "data_offset": 2048, 00:16:08.664 "data_size": 63488 00:16:08.664 }, 00:16:08.664 { 00:16:08.664 "name": "pt4", 00:16:08.664 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:08.664 "is_configured": true, 00:16:08.664 "data_offset": 2048, 00:16:08.664 "data_size": 63488 00:16:08.664 } 00:16:08.664 ] 00:16:08.664 } 00:16:08.664 } 00:16:08.664 }' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:08.664 pt2 00:16:08.664 pt3 00:16:08.664 pt4' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.664 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.922 09:08:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:08.922 [2024-11-06 09:08:45.344663] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f7173361-0fd4-4381-878a-f3af421b98be 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f7173361-0fd4-4381-878a-f3af421b98be ']' 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.922 [2024-11-06 09:08:45.408308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:08.922 [2024-11-06 09:08:45.408345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:08.922 [2024-11-06 09:08:45.408457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.922 [2024-11-06 09:08:45.408550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.922 [2024-11-06 09:08:45.408573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.922 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 [2024-11-06 09:08:45.564370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:09.181 [2024-11-06 09:08:45.566911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:09.181 [2024-11-06 09:08:45.566991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:09.181 [2024-11-06 09:08:45.567049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:09.181 [2024-11-06 09:08:45.567134] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:09.181 [2024-11-06 09:08:45.567206] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:09.181 [2024-11-06 09:08:45.567241] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:09.181 [2024-11-06 09:08:45.567273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:09.181 [2024-11-06 09:08:45.567295] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.181 [2024-11-06 09:08:45.567314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:16:09.181 request: 00:16:09.181 { 00:16:09.181 "name": "raid_bdev1", 00:16:09.181 "raid_level": "raid0", 00:16:09.181 "base_bdevs": [ 00:16:09.181 "malloc1", 00:16:09.181 "malloc2", 00:16:09.181 "malloc3", 00:16:09.181 "malloc4" 00:16:09.181 ], 00:16:09.181 "strip_size_kb": 64, 00:16:09.181 "superblock": false, 00:16:09.181 "method": "bdev_raid_create", 00:16:09.181 "req_id": 1 00:16:09.181 } 00:16:09.181 Got JSON-RPC error response 00:16:09.181 response: 00:16:09.181 { 00:16:09.181 "code": -17, 00:16:09.181 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:09.181 } 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 [2024-11-06 09:08:45.624357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:09.181 [2024-11-06 09:08:45.624437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.181 [2024-11-06 09:08:45.624462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:09.181 [2024-11-06 09:08:45.624480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.181 [2024-11-06 09:08:45.627406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.181 [2024-11-06 09:08:45.627457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:09.181 [2024-11-06 09:08:45.627568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:09.181 [2024-11-06 09:08:45.627650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:09.181 pt1 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.181 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.182 "name": "raid_bdev1", 00:16:09.182 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:09.182 "strip_size_kb": 64, 00:16:09.182 "state": "configuring", 00:16:09.182 "raid_level": "raid0", 00:16:09.182 "superblock": true, 00:16:09.182 "num_base_bdevs": 4, 00:16:09.182 "num_base_bdevs_discovered": 1, 00:16:09.182 "num_base_bdevs_operational": 4, 00:16:09.182 "base_bdevs_list": [ 00:16:09.182 { 00:16:09.182 "name": "pt1", 00:16:09.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.182 "is_configured": true, 00:16:09.182 "data_offset": 2048, 00:16:09.182 "data_size": 63488 00:16:09.182 }, 00:16:09.182 { 00:16:09.182 "name": null, 00:16:09.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.182 "is_configured": false, 00:16:09.182 "data_offset": 2048, 00:16:09.182 "data_size": 63488 00:16:09.182 }, 00:16:09.182 { 00:16:09.182 "name": null, 00:16:09.182 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:16:09.182 "is_configured": false, 00:16:09.182 "data_offset": 2048, 00:16:09.182 "data_size": 63488 00:16:09.182 }, 00:16:09.182 { 00:16:09.182 "name": null, 00:16:09.182 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.182 "is_configured": false, 00:16:09.182 "data_offset": 2048, 00:16:09.182 "data_size": 63488 00:16:09.182 } 00:16:09.182 ] 00:16:09.182 }' 00:16:09.182 09:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.182 09:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.748 [2024-11-06 09:08:46.084497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:09.748 [2024-11-06 09:08:46.084601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.748 [2024-11-06 09:08:46.084631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:09.748 [2024-11-06 09:08:46.084649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.748 [2024-11-06 09:08:46.085216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.748 [2024-11-06 09:08:46.085255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:09.748 [2024-11-06 09:08:46.085354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:09.748 [2024-11-06 09:08:46.085392] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:09.748 pt2 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.748 [2024-11-06 09:08:46.092517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.748 09:08:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.748 "name": "raid_bdev1", 00:16:09.748 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:09.748 "strip_size_kb": 64, 00:16:09.748 "state": "configuring", 00:16:09.748 "raid_level": "raid0", 00:16:09.748 "superblock": true, 00:16:09.748 "num_base_bdevs": 4, 00:16:09.748 "num_base_bdevs_discovered": 1, 00:16:09.748 "num_base_bdevs_operational": 4, 00:16:09.748 "base_bdevs_list": [ 00:16:09.748 { 00:16:09.748 "name": "pt1", 00:16:09.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:09.748 "is_configured": true, 00:16:09.748 "data_offset": 2048, 00:16:09.748 "data_size": 63488 00:16:09.748 }, 00:16:09.748 { 00:16:09.748 "name": null, 00:16:09.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:09.748 "is_configured": false, 00:16:09.748 "data_offset": 0, 00:16:09.748 "data_size": 63488 00:16:09.748 }, 00:16:09.748 { 00:16:09.748 "name": null, 00:16:09.748 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:09.748 "is_configured": false, 00:16:09.748 "data_offset": 2048, 00:16:09.748 "data_size": 63488 00:16:09.748 }, 00:16:09.748 { 00:16:09.748 "name": null, 00:16:09.748 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:09.748 "is_configured": false, 00:16:09.748 "data_offset": 2048, 00:16:09.748 "data_size": 63488 00:16:09.748 } 00:16:09.748 ] 00:16:09.748 }' 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.748 09:08:46 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.314 [2024-11-06 09:08:46.588611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:10.314 [2024-11-06 09:08:46.588684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.314 [2024-11-06 09:08:46.588714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:10.314 [2024-11-06 09:08:46.588730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.314 [2024-11-06 09:08:46.589289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.314 [2024-11-06 09:08:46.589323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:10.314 [2024-11-06 09:08:46.589428] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:10.314 [2024-11-06 09:08:46.589460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.314 pt2 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.314 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.314 [2024-11-06 09:08:46.596617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:10.314 [2024-11-06 09:08:46.596673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.314 [2024-11-06 09:08:46.596707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:10.314 [2024-11-06 09:08:46.596724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.314 [2024-11-06 09:08:46.597176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.314 [2024-11-06 09:08:46.597202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:10.315 [2024-11-06 09:08:46.597280] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:10.315 [2024-11-06 09:08:46.597308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:10.315 pt3 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.315 [2024-11-06 09:08:46.604568] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:10.315 [2024-11-06 09:08:46.604638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.315 [2024-11-06 09:08:46.604666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:10.315 [2024-11-06 09:08:46.604678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.315 [2024-11-06 09:08:46.605176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.315 [2024-11-06 09:08:46.605218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:10.315 [2024-11-06 09:08:46.605301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:10.315 [2024-11-06 09:08:46.605329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:10.315 [2024-11-06 09:08:46.605535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:10.315 [2024-11-06 09:08:46.605550] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:10.315 [2024-11-06 09:08:46.605893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:10.315 [2024-11-06 09:08:46.606095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:10.315 [2024-11-06 09:08:46.606118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:10.315 [2024-11-06 09:08:46.606274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.315 pt4 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.315 "name": "raid_bdev1", 00:16:10.315 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:10.315 "strip_size_kb": 64, 00:16:10.315 "state": "online", 00:16:10.315 "raid_level": "raid0", 00:16:10.315 
"superblock": true, 00:16:10.315 "num_base_bdevs": 4, 00:16:10.315 "num_base_bdevs_discovered": 4, 00:16:10.315 "num_base_bdevs_operational": 4, 00:16:10.315 "base_bdevs_list": [ 00:16:10.315 { 00:16:10.315 "name": "pt1", 00:16:10.315 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.315 "is_configured": true, 00:16:10.315 "data_offset": 2048, 00:16:10.315 "data_size": 63488 00:16:10.315 }, 00:16:10.315 { 00:16:10.315 "name": "pt2", 00:16:10.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.315 "is_configured": true, 00:16:10.315 "data_offset": 2048, 00:16:10.315 "data_size": 63488 00:16:10.315 }, 00:16:10.315 { 00:16:10.315 "name": "pt3", 00:16:10.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.315 "is_configured": true, 00:16:10.315 "data_offset": 2048, 00:16:10.315 "data_size": 63488 00:16:10.315 }, 00:16:10.315 { 00:16:10.315 "name": "pt4", 00:16:10.315 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.315 "is_configured": true, 00:16:10.315 "data_offset": 2048, 00:16:10.315 "data_size": 63488 00:16:10.315 } 00:16:10.315 ] 00:16:10.315 }' 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.315 09:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.586 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:10.586 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:10.586 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:10.586 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:10.586 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:10.854 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:10.854 09:08:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:10.854 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.854 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.854 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.854 [2024-11-06 09:08:47.121223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.854 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.854 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:10.854 "name": "raid_bdev1", 00:16:10.854 "aliases": [ 00:16:10.854 "f7173361-0fd4-4381-878a-f3af421b98be" 00:16:10.854 ], 00:16:10.854 "product_name": "Raid Volume", 00:16:10.854 "block_size": 512, 00:16:10.854 "num_blocks": 253952, 00:16:10.854 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:10.854 "assigned_rate_limits": { 00:16:10.854 "rw_ios_per_sec": 0, 00:16:10.854 "rw_mbytes_per_sec": 0, 00:16:10.854 "r_mbytes_per_sec": 0, 00:16:10.854 "w_mbytes_per_sec": 0 00:16:10.854 }, 00:16:10.854 "claimed": false, 00:16:10.854 "zoned": false, 00:16:10.854 "supported_io_types": { 00:16:10.854 "read": true, 00:16:10.854 "write": true, 00:16:10.854 "unmap": true, 00:16:10.854 "flush": true, 00:16:10.854 "reset": true, 00:16:10.854 "nvme_admin": false, 00:16:10.854 "nvme_io": false, 00:16:10.854 "nvme_io_md": false, 00:16:10.854 "write_zeroes": true, 00:16:10.854 "zcopy": false, 00:16:10.854 "get_zone_info": false, 00:16:10.854 "zone_management": false, 00:16:10.854 "zone_append": false, 00:16:10.854 "compare": false, 00:16:10.854 "compare_and_write": false, 00:16:10.854 "abort": false, 00:16:10.854 "seek_hole": false, 00:16:10.854 "seek_data": false, 00:16:10.854 "copy": false, 00:16:10.854 "nvme_iov_md": false 00:16:10.854 }, 00:16:10.854 
"memory_domains": [ 00:16:10.854 { 00:16:10.854 "dma_device_id": "system", 00:16:10.854 "dma_device_type": 1 00:16:10.854 }, 00:16:10.854 { 00:16:10.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.854 "dma_device_type": 2 00:16:10.854 }, 00:16:10.854 { 00:16:10.855 "dma_device_id": "system", 00:16:10.855 "dma_device_type": 1 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.855 "dma_device_type": 2 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "dma_device_id": "system", 00:16:10.855 "dma_device_type": 1 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.855 "dma_device_type": 2 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "dma_device_id": "system", 00:16:10.855 "dma_device_type": 1 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.855 "dma_device_type": 2 00:16:10.855 } 00:16:10.855 ], 00:16:10.855 "driver_specific": { 00:16:10.855 "raid": { 00:16:10.855 "uuid": "f7173361-0fd4-4381-878a-f3af421b98be", 00:16:10.855 "strip_size_kb": 64, 00:16:10.855 "state": "online", 00:16:10.855 "raid_level": "raid0", 00:16:10.855 "superblock": true, 00:16:10.855 "num_base_bdevs": 4, 00:16:10.855 "num_base_bdevs_discovered": 4, 00:16:10.855 "num_base_bdevs_operational": 4, 00:16:10.855 "base_bdevs_list": [ 00:16:10.855 { 00:16:10.855 "name": "pt1", 00:16:10.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.855 "is_configured": true, 00:16:10.855 "data_offset": 2048, 00:16:10.855 "data_size": 63488 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "name": "pt2", 00:16:10.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.855 "is_configured": true, 00:16:10.855 "data_offset": 2048, 00:16:10.855 "data_size": 63488 00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "name": "pt3", 00:16:10.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:10.855 "is_configured": true, 00:16:10.855 "data_offset": 2048, 00:16:10.855 "data_size": 63488 
00:16:10.855 }, 00:16:10.855 { 00:16:10.855 "name": "pt4", 00:16:10.855 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:10.855 "is_configured": true, 00:16:10.855 "data_offset": 2048, 00:16:10.855 "data_size": 63488 00:16:10.855 } 00:16:10.855 ] 00:16:10.855 } 00:16:10.855 } 00:16:10.855 }' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:10.855 pt2 00:16:10.855 pt3 00:16:10.855 pt4' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.855 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:11.113 [2024-11-06 09:08:47.481278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f7173361-0fd4-4381-878a-f3af421b98be '!=' f7173361-0fd4-4381-878a-f3af421b98be ']' 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70760 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 70760 ']' 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 70760 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@957 -- # uname 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70760 00:16:11.113 killing process with pid 70760 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70760' 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 70760 00:16:11.113 09:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 70760 00:16:11.113 [2024-11-06 09:08:47.556981] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.113 [2024-11-06 09:08:47.557089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.113 [2024-11-06 09:08:47.557183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.113 [2024-11-06 09:08:47.557211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:11.680 [2024-11-06 09:08:47.925119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.614 ************************************ 00:16:12.614 END TEST raid_superblock_test 00:16:12.614 ************************************ 00:16:12.614 09:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:12.614 00:16:12.614 real 0m5.905s 00:16:12.614 user 0m8.824s 00:16:12.614 sys 0m0.878s 00:16:12.614 09:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:12.614 09:08:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 09:08:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:16:12.614 09:08:49 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:12.614 09:08:49 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:12.614 09:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 ************************************ 00:16:12.614 START TEST raid_read_error_test 00:16:12.614 ************************************ 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GwgfzhZf8H 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71030 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71030 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 71030 ']' 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:12.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:12.614 09:08:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.614 [2024-11-06 09:08:49.146685] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:16:12.614 [2024-11-06 09:08:49.146891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71030 ] 00:16:12.874 [2024-11-06 09:08:49.333977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.133 [2024-11-06 09:08:49.467090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.389 [2024-11-06 09:08:49.670265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.389 [2024-11-06 09:08:49.670346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 BaseBdev1_malloc 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 true 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 [2024-11-06 09:08:50.247678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:13.955 [2024-11-06 09:08:50.247745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.955 [2024-11-06 09:08:50.247777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:13.955 [2024-11-06 09:08:50.247796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.955 [2024-11-06 09:08:50.250712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.955 [2024-11-06 09:08:50.250760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:13.955 BaseBdev1 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 BaseBdev2_malloc 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.955 true 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.955 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 [2024-11-06 09:08:50.304114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:13.956 [2024-11-06 09:08:50.304185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.956 [2024-11-06 09:08:50.304213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:13.956 [2024-11-06 09:08:50.304232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.956 [2024-11-06 09:08:50.307104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.956 [2024-11-06 09:08:50.307152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:13.956 BaseBdev2 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 BaseBdev3_malloc 00:16:13.956 09:08:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 true 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 [2024-11-06 09:08:50.368337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:13.956 [2024-11-06 09:08:50.368406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.956 [2024-11-06 09:08:50.368437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:13.956 [2024-11-06 09:08:50.368456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.956 [2024-11-06 09:08:50.371407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.956 [2024-11-06 09:08:50.371454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:13.956 BaseBdev3 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 BaseBdev4_malloc 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 true 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 [2024-11-06 09:08:50.424406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:13.956 [2024-11-06 09:08:50.424471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.956 [2024-11-06 09:08:50.424500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:13.956 [2024-11-06 09:08:50.424519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.956 [2024-11-06 09:08:50.427255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.956 [2024-11-06 09:08:50.427304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:13.956 BaseBdev4 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 [2024-11-06 09:08:50.432485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.956 [2024-11-06 09:08:50.434959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.956 [2024-11-06 09:08:50.435075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.956 [2024-11-06 09:08:50.435183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:13.956 [2024-11-06 09:08:50.435482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:13.956 [2024-11-06 09:08:50.435522] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:13.956 [2024-11-06 09:08:50.435878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:13.956 [2024-11-06 09:08:50.436104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:13.956 [2024-11-06 09:08:50.436133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:13.956 [2024-11-06 09:08:50.436391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:13.956 09:08:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.956 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.214 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.214 "name": "raid_bdev1", 00:16:14.214 "uuid": "7b066984-7f74-4cf0-a7e2-f2732027d88e", 00:16:14.214 "strip_size_kb": 64, 00:16:14.214 "state": "online", 00:16:14.214 "raid_level": "raid0", 00:16:14.214 "superblock": true, 00:16:14.214 "num_base_bdevs": 4, 00:16:14.214 "num_base_bdevs_discovered": 4, 00:16:14.214 "num_base_bdevs_operational": 4, 00:16:14.214 "base_bdevs_list": [ 00:16:14.214 
{ 00:16:14.214 "name": "BaseBdev1", 00:16:14.214 "uuid": "e29455cf-8a7c-56bb-ab79-4820052c1b8e", 00:16:14.214 "is_configured": true, 00:16:14.214 "data_offset": 2048, 00:16:14.214 "data_size": 63488 00:16:14.214 }, 00:16:14.214 { 00:16:14.214 "name": "BaseBdev2", 00:16:14.215 "uuid": "4895d0d4-ebb3-5fca-ba1a-f9f1ecf49283", 00:16:14.215 "is_configured": true, 00:16:14.215 "data_offset": 2048, 00:16:14.215 "data_size": 63488 00:16:14.215 }, 00:16:14.215 { 00:16:14.215 "name": "BaseBdev3", 00:16:14.215 "uuid": "8ac34154-166e-5ee3-b959-c45440b903dd", 00:16:14.215 "is_configured": true, 00:16:14.215 "data_offset": 2048, 00:16:14.215 "data_size": 63488 00:16:14.215 }, 00:16:14.215 { 00:16:14.215 "name": "BaseBdev4", 00:16:14.215 "uuid": "654f466d-88d3-57b0-a019-e457d7c9c050", 00:16:14.215 "is_configured": true, 00:16:14.215 "data_offset": 2048, 00:16:14.215 "data_size": 63488 00:16:14.215 } 00:16:14.215 ] 00:16:14.215 }' 00:16:14.215 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.215 09:08:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.497 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:14.498 09:08:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:14.760 [2024-11-06 09:08:51.062457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:15.726 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:15.726 09:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.727 09:08:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.727 09:08:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.727 "name": "raid_bdev1", 00:16:15.727 "uuid": "7b066984-7f74-4cf0-a7e2-f2732027d88e", 00:16:15.727 "strip_size_kb": 64, 00:16:15.727 "state": "online", 00:16:15.727 "raid_level": "raid0", 00:16:15.727 "superblock": true, 00:16:15.727 "num_base_bdevs": 4, 00:16:15.727 "num_base_bdevs_discovered": 4, 00:16:15.727 "num_base_bdevs_operational": 4, 00:16:15.727 "base_bdevs_list": [ 00:16:15.727 { 00:16:15.727 "name": "BaseBdev1", 00:16:15.727 "uuid": "e29455cf-8a7c-56bb-ab79-4820052c1b8e", 00:16:15.727 "is_configured": true, 00:16:15.727 "data_offset": 2048, 00:16:15.727 "data_size": 63488 00:16:15.727 }, 00:16:15.727 { 00:16:15.727 "name": "BaseBdev2", 00:16:15.727 "uuid": "4895d0d4-ebb3-5fca-ba1a-f9f1ecf49283", 00:16:15.727 "is_configured": true, 00:16:15.727 "data_offset": 2048, 00:16:15.727 "data_size": 63488 00:16:15.727 }, 00:16:15.727 { 00:16:15.727 "name": "BaseBdev3", 00:16:15.727 "uuid": "8ac34154-166e-5ee3-b959-c45440b903dd", 00:16:15.727 "is_configured": true, 00:16:15.727 "data_offset": 2048, 00:16:15.727 "data_size": 63488 00:16:15.727 }, 00:16:15.727 { 00:16:15.727 "name": "BaseBdev4", 00:16:15.727 "uuid": "654f466d-88d3-57b0-a019-e457d7c9c050", 00:16:15.727 "is_configured": true, 00:16:15.727 "data_offset": 2048, 00:16:15.727 "data_size": 63488 00:16:15.727 } 00:16:15.727 ] 00:16:15.727 }' 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.727 09:08:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.001 [2024-11-06 09:08:52.487461] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:16.001 [2024-11-06 09:08:52.487659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:16.001 [2024-11-06 09:08:52.491230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.001 [2024-11-06 09:08:52.491427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.001 [2024-11-06 09:08:52.491607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.001 [2024-11-06 09:08:52.491766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:16.001 { 00:16:16.001 "results": [ 00:16:16.001 { 00:16:16.001 "job": "raid_bdev1", 00:16:16.001 "core_mask": "0x1", 00:16:16.001 "workload": "randrw", 00:16:16.001 "percentage": 50, 00:16:16.001 "status": "finished", 00:16:16.001 "queue_depth": 1, 00:16:16.001 "io_size": 131072, 00:16:16.001 "runtime": 1.422156, 00:16:16.001 "iops": 10334.309316277539, 00:16:16.001 "mibps": 1291.7886645346923, 00:16:16.001 "io_failed": 1, 00:16:16.001 "io_timeout": 0, 00:16:16.001 "avg_latency_us": 135.16280805056965, 00:16:16.001 "min_latency_us": 41.89090909090909, 00:16:16.001 "max_latency_us": 2249.0763636363636 00:16:16.001 } 00:16:16.001 ], 00:16:16.001 "core_count": 1 00:16:16.001 } 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71030 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 71030 ']' 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 71030 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71030 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:16.001 killing process with pid 71030 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71030' 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 71030 00:16:16.001 [2024-11-06 09:08:52.527234] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.001 09:08:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 71030 00:16:16.589 [2024-11-06 09:08:52.817572] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.524 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GwgfzhZf8H 00:16:17.524 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:17.525 ************************************ 00:16:17.525 END TEST raid_read_error_test 00:16:17.525 ************************************ 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:16:17.525 00:16:17.525 real 0m4.885s 
00:16:17.525 user 0m6.094s 00:16:17.525 sys 0m0.585s 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:17.525 09:08:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.525 09:08:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:16:17.525 09:08:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:17.525 09:08:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:17.525 09:08:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:17.525 ************************************ 00:16:17.525 START TEST raid_write_error_test 00:16:17.525 ************************************ 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DwK2ArH7Xv 00:16:17.525 09:08:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71179 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71179 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 71179 ']' 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.525 09:08:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.784 [2024-11-06 09:08:54.087933] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:16:17.784 [2024-11-06 09:08:54.088371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71179 ] 00:16:17.784 [2024-11-06 09:08:54.268505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.080 [2024-11-06 09:08:54.399476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.080 [2024-11-06 09:08:54.601312] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.080 [2024-11-06 09:08:54.601392] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.646 BaseBdev1_malloc 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.646 true 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.646 [2024-11-06 09:08:55.084705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:18.646 [2024-11-06 09:08:55.084937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.646 [2024-11-06 09:08:55.085093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:18.646 [2024-11-06 09:08:55.085216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.646 [2024-11-06 09:08:55.088036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.646 [2024-11-06 09:08:55.088087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:18.646 BaseBdev1 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.646 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.647 BaseBdev2_malloc 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:18.647 09:08:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.647 true 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.647 [2024-11-06 09:08:55.148577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:18.647 [2024-11-06 09:08:55.148652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.647 [2024-11-06 09:08:55.148680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:18.647 [2024-11-06 09:08:55.148698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.647 [2024-11-06 09:08:55.151476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.647 [2024-11-06 09:08:55.151527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:18.647 BaseBdev2 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.647 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:16:18.906 BaseBdev3_malloc 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.906 true 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.906 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.906 [2024-11-06 09:08:55.224616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:18.907 [2024-11-06 09:08:55.224823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.907 [2024-11-06 09:08:55.224898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:18.907 [2024-11-06 09:08:55.225012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.907 [2024-11-06 09:08:55.227859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.907 [2024-11-06 09:08:55.228019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:18.907 BaseBdev3 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.907 BaseBdev4_malloc 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.907 true 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.907 [2024-11-06 09:08:55.284447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:18.907 [2024-11-06 09:08:55.284644] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.907 [2024-11-06 09:08:55.284775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:18.907 [2024-11-06 09:08:55.284901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.907 [2024-11-06 09:08:55.287807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.907 [2024-11-06 09:08:55.287989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:18.907 BaseBdev4 
00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.907 [2024-11-06 09:08:55.296547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.907 [2024-11-06 09:08:55.299160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.907 [2024-11-06 09:08:55.299382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.907 [2024-11-06 09:08:55.299534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:18.907 [2024-11-06 09:08:55.299932] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:18.907 [2024-11-06 09:08:55.300064] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:18.907 [2024-11-06 09:08:55.300444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:18.907 [2024-11-06 09:08:55.300788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:18.907 [2024-11-06 09:08:55.300928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:18.907 [2024-11-06 09:08:55.301378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.907 "name": "raid_bdev1", 00:16:18.907 "uuid": "1cd5f75b-b84f-4d13-9746-382066eff4c1", 00:16:18.907 "strip_size_kb": 64, 00:16:18.907 "state": "online", 00:16:18.907 "raid_level": "raid0", 00:16:18.907 "superblock": true, 00:16:18.907 "num_base_bdevs": 4, 00:16:18.907 "num_base_bdevs_discovered": 4, 00:16:18.907 
"num_base_bdevs_operational": 4, 00:16:18.907 "base_bdevs_list": [ 00:16:18.907 { 00:16:18.907 "name": "BaseBdev1", 00:16:18.907 "uuid": "a0f3306d-96ca-558b-9a9e-5543bca556d0", 00:16:18.907 "is_configured": true, 00:16:18.907 "data_offset": 2048, 00:16:18.907 "data_size": 63488 00:16:18.907 }, 00:16:18.907 { 00:16:18.907 "name": "BaseBdev2", 00:16:18.907 "uuid": "e22c2dcd-3b1d-5d1b-9f22-4585a31a89b5", 00:16:18.907 "is_configured": true, 00:16:18.907 "data_offset": 2048, 00:16:18.907 "data_size": 63488 00:16:18.907 }, 00:16:18.907 { 00:16:18.907 "name": "BaseBdev3", 00:16:18.907 "uuid": "b737d341-d2ba-5d3e-8970-7a9ad2f55c1f", 00:16:18.907 "is_configured": true, 00:16:18.907 "data_offset": 2048, 00:16:18.907 "data_size": 63488 00:16:18.907 }, 00:16:18.907 { 00:16:18.907 "name": "BaseBdev4", 00:16:18.907 "uuid": "ea3ecf88-049b-55c6-a571-04186bc93212", 00:16:18.907 "is_configured": true, 00:16:18.907 "data_offset": 2048, 00:16:18.907 "data_size": 63488 00:16:18.907 } 00:16:18.907 ] 00:16:18.907 }' 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.907 09:08:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.474 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:19.474 09:08:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:19.474 [2024-11-06 09:08:55.954908] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.409 "name": "raid_bdev1", 00:16:20.409 "uuid": "1cd5f75b-b84f-4d13-9746-382066eff4c1", 00:16:20.409 "strip_size_kb": 64, 00:16:20.409 "state": "online", 00:16:20.409 "raid_level": "raid0", 00:16:20.409 "superblock": true, 00:16:20.409 "num_base_bdevs": 4, 00:16:20.409 "num_base_bdevs_discovered": 4, 00:16:20.409 "num_base_bdevs_operational": 4, 00:16:20.409 "base_bdevs_list": [ 00:16:20.409 { 00:16:20.409 "name": "BaseBdev1", 00:16:20.409 "uuid": "a0f3306d-96ca-558b-9a9e-5543bca556d0", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 2048, 00:16:20.409 "data_size": 63488 00:16:20.409 }, 00:16:20.409 { 00:16:20.409 "name": "BaseBdev2", 00:16:20.409 "uuid": "e22c2dcd-3b1d-5d1b-9f22-4585a31a89b5", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 2048, 00:16:20.409 "data_size": 63488 00:16:20.409 }, 00:16:20.409 { 00:16:20.409 "name": "BaseBdev3", 00:16:20.409 "uuid": "b737d341-d2ba-5d3e-8970-7a9ad2f55c1f", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 2048, 00:16:20.409 "data_size": 63488 00:16:20.409 }, 00:16:20.409 { 00:16:20.409 "name": "BaseBdev4", 00:16:20.409 "uuid": "ea3ecf88-049b-55c6-a571-04186bc93212", 00:16:20.409 "is_configured": true, 00:16:20.409 "data_offset": 2048, 00:16:20.409 "data_size": 63488 00:16:20.409 } 00:16:20.409 ] 00:16:20.409 }' 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.409 09:08:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:20.978 [2024-11-06 09:08:57.357893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:20.978 [2024-11-06 09:08:57.357931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.978 [2024-11-06 09:08:57.361347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.978 [2024-11-06 09:08:57.361538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.978 [2024-11-06 09:08:57.361643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.978 [2024-11-06 09:08:57.361849] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:20.978 { 00:16:20.978 "results": [ 00:16:20.978 { 00:16:20.978 "job": "raid_bdev1", 00:16:20.978 "core_mask": "0x1", 00:16:20.978 "workload": "randrw", 00:16:20.978 "percentage": 50, 00:16:20.978 "status": "finished", 00:16:20.978 "queue_depth": 1, 00:16:20.978 "io_size": 131072, 00:16:20.978 "runtime": 1.400526, 00:16:20.978 "iops": 10686.69914018019, 00:16:20.978 "mibps": 1335.8373925225237, 00:16:20.978 "io_failed": 1, 00:16:20.978 "io_timeout": 0, 00:16:20.978 "avg_latency_us": 130.78137699820223, 00:16:20.978 "min_latency_us": 43.054545454545455, 00:16:20.978 "max_latency_us": 1839.4763636363637 00:16:20.978 } 00:16:20.978 ], 00:16:20.978 "core_count": 1 00:16:20.978 } 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71179 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 71179 ']' 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 71179 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # 
uname 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71179 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:20.978 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71179' 00:16:20.979 killing process with pid 71179 00:16:20.979 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 71179 00:16:20.979 [2024-11-06 09:08:57.398308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.979 09:08:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 71179 00:16:21.237 [2024-11-06 09:08:57.693516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DwK2ArH7Xv 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:16:22.609 00:16:22.609 real 0m4.807s 00:16:22.609 user 0m5.943s 00:16:22.609 sys 0m0.580s 00:16:22.609 
************************************ 00:16:22.609 END TEST raid_write_error_test 00:16:22.609 ************************************ 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:22.609 09:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.609 09:08:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:16:22.610 09:08:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:16:22.610 09:08:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:22.610 09:08:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:22.610 09:08:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.610 ************************************ 00:16:22.610 START TEST raid_state_function_test 00:16:22.610 ************************************ 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.610 09:08:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:22.610 09:08:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71327 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:22.610 Process raid pid: 71327 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71327' 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71327 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 71327 ']' 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:22.610 09:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.610 [2024-11-06 09:08:58.934461] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:16:22.610 [2024-11-06 09:08:58.934865] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.610 [2024-11-06 09:08:59.126366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.869 [2024-11-06 09:08:59.287420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.127 [2024-11-06 09:08:59.525572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.127 [2024-11-06 09:08:59.525625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.692 [2024-11-06 09:08:59.927705] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.692 [2024-11-06 09:08:59.927984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.692 [2024-11-06 09:08:59.928024] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.692 [2024-11-06 09:08:59.928046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.692 [2024-11-06 09:08:59.928058] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:23.692 [2024-11-06 09:08:59.928072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.692 [2024-11-06 09:08:59.928083] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.692 [2024-11-06 09:08:59.928097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:23.692 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.693 "name": "Existed_Raid", 00:16:23.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.693 "strip_size_kb": 64, 00:16:23.693 "state": "configuring", 00:16:23.693 "raid_level": "concat", 00:16:23.693 "superblock": false, 00:16:23.693 "num_base_bdevs": 4, 00:16:23.693 "num_base_bdevs_discovered": 0, 00:16:23.693 "num_base_bdevs_operational": 4, 00:16:23.693 "base_bdevs_list": [ 00:16:23.693 { 00:16:23.693 "name": "BaseBdev1", 00:16:23.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.693 "is_configured": false, 00:16:23.693 "data_offset": 0, 00:16:23.693 "data_size": 0 00:16:23.693 }, 00:16:23.693 { 00:16:23.693 "name": "BaseBdev2", 00:16:23.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.693 "is_configured": false, 00:16:23.693 "data_offset": 0, 00:16:23.693 "data_size": 0 00:16:23.693 }, 00:16:23.693 { 00:16:23.693 "name": "BaseBdev3", 00:16:23.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.693 "is_configured": false, 00:16:23.693 "data_offset": 0, 00:16:23.693 "data_size": 0 00:16:23.693 }, 00:16:23.693 { 00:16:23.693 "name": "BaseBdev4", 00:16:23.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.693 "is_configured": false, 00:16:23.693 "data_offset": 0, 00:16:23.693 "data_size": 0 00:16:23.693 } 00:16:23.693 ] 00:16:23.693 }' 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.693 09:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.951 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:16:23.951 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.951 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.951 [2024-11-06 09:09:00.423759] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:23.951 [2024-11-06 09:09:00.423833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:23.951 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.951 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:23.951 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.951 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.951 [2024-11-06 09:09:00.431764] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.951 [2024-11-06 09:09:00.431831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.951 [2024-11-06 09:09:00.431848] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.951 [2024-11-06 09:09:00.431864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.951 [2024-11-06 09:09:00.431874] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.951 [2024-11-06 09:09:00.431889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.952 [2024-11-06 09:09:00.431899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:23.952 [2024-11-06 09:09:00.431912] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.952 [2024-11-06 09:09:00.476568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.952 BaseBdev1 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.952 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.210 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.210 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:24.210 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.210 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.210 [ 00:16:24.210 { 00:16:24.210 "name": "BaseBdev1", 00:16:24.210 "aliases": [ 00:16:24.210 "477913ca-6cb9-41d7-8a54-e3b57df341e9" 00:16:24.210 ], 00:16:24.210 "product_name": "Malloc disk", 00:16:24.210 "block_size": 512, 00:16:24.210 "num_blocks": 65536, 00:16:24.210 "uuid": "477913ca-6cb9-41d7-8a54-e3b57df341e9", 00:16:24.210 "assigned_rate_limits": { 00:16:24.210 "rw_ios_per_sec": 0, 00:16:24.210 "rw_mbytes_per_sec": 0, 00:16:24.210 "r_mbytes_per_sec": 0, 00:16:24.210 "w_mbytes_per_sec": 0 00:16:24.210 }, 00:16:24.210 "claimed": true, 00:16:24.210 "claim_type": "exclusive_write", 00:16:24.210 "zoned": false, 00:16:24.210 "supported_io_types": { 00:16:24.210 "read": true, 00:16:24.210 "write": true, 00:16:24.211 "unmap": true, 00:16:24.211 "flush": true, 00:16:24.211 "reset": true, 00:16:24.211 "nvme_admin": false, 00:16:24.211 "nvme_io": false, 00:16:24.211 "nvme_io_md": false, 00:16:24.211 "write_zeroes": true, 00:16:24.211 "zcopy": true, 00:16:24.211 "get_zone_info": false, 00:16:24.211 "zone_management": false, 00:16:24.211 "zone_append": false, 00:16:24.211 "compare": false, 00:16:24.211 "compare_and_write": false, 00:16:24.211 "abort": true, 00:16:24.211 "seek_hole": false, 00:16:24.211 "seek_data": false, 00:16:24.211 "copy": true, 00:16:24.211 "nvme_iov_md": false 00:16:24.211 }, 00:16:24.211 "memory_domains": [ 00:16:24.211 { 00:16:24.211 "dma_device_id": "system", 00:16:24.211 "dma_device_type": 1 00:16:24.211 }, 00:16:24.211 { 00:16:24.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.211 "dma_device_type": 2 00:16:24.211 } 00:16:24.211 ], 00:16:24.211 "driver_specific": {} 00:16:24.211 } 00:16:24.211 ] 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.211 "name": "Existed_Raid", 
00:16:24.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.211 "strip_size_kb": 64, 00:16:24.211 "state": "configuring", 00:16:24.211 "raid_level": "concat", 00:16:24.211 "superblock": false, 00:16:24.211 "num_base_bdevs": 4, 00:16:24.211 "num_base_bdevs_discovered": 1, 00:16:24.211 "num_base_bdevs_operational": 4, 00:16:24.211 "base_bdevs_list": [ 00:16:24.211 { 00:16:24.211 "name": "BaseBdev1", 00:16:24.211 "uuid": "477913ca-6cb9-41d7-8a54-e3b57df341e9", 00:16:24.211 "is_configured": true, 00:16:24.211 "data_offset": 0, 00:16:24.211 "data_size": 65536 00:16:24.211 }, 00:16:24.211 { 00:16:24.211 "name": "BaseBdev2", 00:16:24.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.211 "is_configured": false, 00:16:24.211 "data_offset": 0, 00:16:24.211 "data_size": 0 00:16:24.211 }, 00:16:24.211 { 00:16:24.211 "name": "BaseBdev3", 00:16:24.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.211 "is_configured": false, 00:16:24.211 "data_offset": 0, 00:16:24.211 "data_size": 0 00:16:24.211 }, 00:16:24.211 { 00:16:24.211 "name": "BaseBdev4", 00:16:24.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.211 "is_configured": false, 00:16:24.211 "data_offset": 0, 00:16:24.211 "data_size": 0 00:16:24.211 } 00:16:24.211 ] 00:16:24.211 }' 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.211 09:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.777 [2024-11-06 09:09:01.012789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.777 [2024-11-06 09:09:01.012867] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.777 [2024-11-06 09:09:01.020891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.777 [2024-11-06 09:09:01.023376] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.777 [2024-11-06 09:09:01.023442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:24.777 [2024-11-06 09:09:01.023461] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.777 [2024-11-06 09:09:01.023479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.777 [2024-11-06 09:09:01.023489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:24.777 [2024-11-06 09:09:01.023503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.777 "name": "Existed_Raid", 00:16:24.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.777 "strip_size_kb": 64, 00:16:24.777 "state": "configuring", 00:16:24.777 "raid_level": "concat", 00:16:24.777 "superblock": false, 00:16:24.777 "num_base_bdevs": 4, 00:16:24.777 
"num_base_bdevs_discovered": 1, 00:16:24.777 "num_base_bdevs_operational": 4, 00:16:24.777 "base_bdevs_list": [ 00:16:24.777 { 00:16:24.777 "name": "BaseBdev1", 00:16:24.777 "uuid": "477913ca-6cb9-41d7-8a54-e3b57df341e9", 00:16:24.777 "is_configured": true, 00:16:24.777 "data_offset": 0, 00:16:24.777 "data_size": 65536 00:16:24.777 }, 00:16:24.777 { 00:16:24.777 "name": "BaseBdev2", 00:16:24.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.777 "is_configured": false, 00:16:24.777 "data_offset": 0, 00:16:24.777 "data_size": 0 00:16:24.777 }, 00:16:24.777 { 00:16:24.777 "name": "BaseBdev3", 00:16:24.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.777 "is_configured": false, 00:16:24.777 "data_offset": 0, 00:16:24.777 "data_size": 0 00:16:24.777 }, 00:16:24.777 { 00:16:24.777 "name": "BaseBdev4", 00:16:24.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.777 "is_configured": false, 00:16:24.777 "data_offset": 0, 00:16:24.777 "data_size": 0 00:16:24.777 } 00:16:24.777 ] 00:16:24.777 }' 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.777 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.035 [2024-11-06 09:09:01.567346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.035 BaseBdev2 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:25.035 09:09:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:25.035 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.036 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.293 [ 00:16:25.293 { 00:16:25.293 "name": "BaseBdev2", 00:16:25.293 "aliases": [ 00:16:25.293 "79c8e026-076b-406e-b96b-2df53b094496" 00:16:25.293 ], 00:16:25.293 "product_name": "Malloc disk", 00:16:25.293 "block_size": 512, 00:16:25.293 "num_blocks": 65536, 00:16:25.293 "uuid": "79c8e026-076b-406e-b96b-2df53b094496", 00:16:25.293 "assigned_rate_limits": { 00:16:25.293 "rw_ios_per_sec": 0, 00:16:25.293 "rw_mbytes_per_sec": 0, 00:16:25.293 "r_mbytes_per_sec": 0, 00:16:25.293 "w_mbytes_per_sec": 0 00:16:25.293 }, 00:16:25.293 "claimed": true, 00:16:25.293 "claim_type": "exclusive_write", 00:16:25.293 "zoned": false, 00:16:25.293 "supported_io_types": { 
00:16:25.293 "read": true, 00:16:25.293 "write": true, 00:16:25.293 "unmap": true, 00:16:25.293 "flush": true, 00:16:25.293 "reset": true, 00:16:25.293 "nvme_admin": false, 00:16:25.293 "nvme_io": false, 00:16:25.293 "nvme_io_md": false, 00:16:25.293 "write_zeroes": true, 00:16:25.293 "zcopy": true, 00:16:25.293 "get_zone_info": false, 00:16:25.293 "zone_management": false, 00:16:25.293 "zone_append": false, 00:16:25.293 "compare": false, 00:16:25.293 "compare_and_write": false, 00:16:25.293 "abort": true, 00:16:25.293 "seek_hole": false, 00:16:25.293 "seek_data": false, 00:16:25.293 "copy": true, 00:16:25.293 "nvme_iov_md": false 00:16:25.293 }, 00:16:25.293 "memory_domains": [ 00:16:25.293 { 00:16:25.293 "dma_device_id": "system", 00:16:25.293 "dma_device_type": 1 00:16:25.293 }, 00:16:25.293 { 00:16:25.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.293 "dma_device_type": 2 00:16:25.293 } 00:16:25.293 ], 00:16:25.293 "driver_specific": {} 00:16:25.293 } 00:16:25.293 ] 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.293 "name": "Existed_Raid", 00:16:25.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.293 "strip_size_kb": 64, 00:16:25.293 "state": "configuring", 00:16:25.293 "raid_level": "concat", 00:16:25.293 "superblock": false, 00:16:25.293 "num_base_bdevs": 4, 00:16:25.293 "num_base_bdevs_discovered": 2, 00:16:25.293 "num_base_bdevs_operational": 4, 00:16:25.293 "base_bdevs_list": [ 00:16:25.293 { 00:16:25.293 "name": "BaseBdev1", 00:16:25.293 "uuid": "477913ca-6cb9-41d7-8a54-e3b57df341e9", 00:16:25.293 "is_configured": true, 00:16:25.293 "data_offset": 0, 00:16:25.293 "data_size": 65536 00:16:25.293 }, 00:16:25.293 { 00:16:25.293 "name": "BaseBdev2", 00:16:25.293 "uuid": "79c8e026-076b-406e-b96b-2df53b094496", 00:16:25.293 
"is_configured": true, 00:16:25.293 "data_offset": 0, 00:16:25.293 "data_size": 65536 00:16:25.293 }, 00:16:25.293 { 00:16:25.293 "name": "BaseBdev3", 00:16:25.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.293 "is_configured": false, 00:16:25.293 "data_offset": 0, 00:16:25.293 "data_size": 0 00:16:25.293 }, 00:16:25.293 { 00:16:25.293 "name": "BaseBdev4", 00:16:25.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.293 "is_configured": false, 00:16:25.293 "data_offset": 0, 00:16:25.293 "data_size": 0 00:16:25.293 } 00:16:25.293 ] 00:16:25.293 }' 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.293 09:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.861 [2024-11-06 09:09:02.164211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.861 BaseBdev3 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.861 [ 00:16:25.861 { 00:16:25.861 "name": "BaseBdev3", 00:16:25.861 "aliases": [ 00:16:25.861 "2cdc48ce-d697-40b3-b385-de8e121ba210" 00:16:25.861 ], 00:16:25.861 "product_name": "Malloc disk", 00:16:25.861 "block_size": 512, 00:16:25.861 "num_blocks": 65536, 00:16:25.861 "uuid": "2cdc48ce-d697-40b3-b385-de8e121ba210", 00:16:25.861 "assigned_rate_limits": { 00:16:25.861 "rw_ios_per_sec": 0, 00:16:25.861 "rw_mbytes_per_sec": 0, 00:16:25.861 "r_mbytes_per_sec": 0, 00:16:25.861 "w_mbytes_per_sec": 0 00:16:25.861 }, 00:16:25.861 "claimed": true, 00:16:25.861 "claim_type": "exclusive_write", 00:16:25.861 "zoned": false, 00:16:25.861 "supported_io_types": { 00:16:25.861 "read": true, 00:16:25.861 "write": true, 00:16:25.861 "unmap": true, 00:16:25.861 "flush": true, 00:16:25.861 "reset": true, 00:16:25.861 "nvme_admin": false, 00:16:25.861 "nvme_io": false, 00:16:25.861 "nvme_io_md": false, 00:16:25.861 "write_zeroes": true, 00:16:25.861 "zcopy": true, 00:16:25.861 "get_zone_info": false, 00:16:25.861 "zone_management": false, 00:16:25.861 "zone_append": false, 00:16:25.861 "compare": false, 00:16:25.861 "compare_and_write": false, 
00:16:25.861 "abort": true, 00:16:25.861 "seek_hole": false, 00:16:25.861 "seek_data": false, 00:16:25.861 "copy": true, 00:16:25.861 "nvme_iov_md": false 00:16:25.861 }, 00:16:25.861 "memory_domains": [ 00:16:25.861 { 00:16:25.861 "dma_device_id": "system", 00:16:25.861 "dma_device_type": 1 00:16:25.861 }, 00:16:25.861 { 00:16:25.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.861 "dma_device_type": 2 00:16:25.861 } 00:16:25.861 ], 00:16:25.861 "driver_specific": {} 00:16:25.861 } 00:16:25.861 ] 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.861 "name": "Existed_Raid", 00:16:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.861 "strip_size_kb": 64, 00:16:25.861 "state": "configuring", 00:16:25.861 "raid_level": "concat", 00:16:25.861 "superblock": false, 00:16:25.861 "num_base_bdevs": 4, 00:16:25.861 "num_base_bdevs_discovered": 3, 00:16:25.861 "num_base_bdevs_operational": 4, 00:16:25.861 "base_bdevs_list": [ 00:16:25.861 { 00:16:25.861 "name": "BaseBdev1", 00:16:25.861 "uuid": "477913ca-6cb9-41d7-8a54-e3b57df341e9", 00:16:25.861 "is_configured": true, 00:16:25.861 "data_offset": 0, 00:16:25.861 "data_size": 65536 00:16:25.861 }, 00:16:25.861 { 00:16:25.861 "name": "BaseBdev2", 00:16:25.861 "uuid": "79c8e026-076b-406e-b96b-2df53b094496", 00:16:25.861 "is_configured": true, 00:16:25.861 "data_offset": 0, 00:16:25.861 "data_size": 65536 00:16:25.861 }, 00:16:25.861 { 00:16:25.861 "name": "BaseBdev3", 00:16:25.861 "uuid": "2cdc48ce-d697-40b3-b385-de8e121ba210", 00:16:25.861 "is_configured": true, 00:16:25.861 "data_offset": 0, 00:16:25.861 "data_size": 65536 00:16:25.861 }, 00:16:25.861 { 00:16:25.861 "name": "BaseBdev4", 00:16:25.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.861 "is_configured": false, 
00:16:25.861 "data_offset": 0, 00:16:25.861 "data_size": 0 00:16:25.861 } 00:16:25.861 ] 00:16:25.861 }' 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.861 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.428 [2024-11-06 09:09:02.746811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:26.428 [2024-11-06 09:09:02.747154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.428 [2024-11-06 09:09:02.747179] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:26.428 [2024-11-06 09:09:02.747541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:26.428 [2024-11-06 09:09:02.747761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:26.428 [2024-11-06 09:09:02.747783] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:26.428 [2024-11-06 09:09:02.748113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.428 BaseBdev4 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.428 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.428 [ 00:16:26.428 { 00:16:26.428 "name": "BaseBdev4", 00:16:26.428 "aliases": [ 00:16:26.428 "1a734acd-cb8e-46a5-8be3-8b7ee10e7780" 00:16:26.428 ], 00:16:26.428 "product_name": "Malloc disk", 00:16:26.428 "block_size": 512, 00:16:26.428 "num_blocks": 65536, 00:16:26.428 "uuid": "1a734acd-cb8e-46a5-8be3-8b7ee10e7780", 00:16:26.428 "assigned_rate_limits": { 00:16:26.428 "rw_ios_per_sec": 0, 00:16:26.428 "rw_mbytes_per_sec": 0, 00:16:26.428 "r_mbytes_per_sec": 0, 00:16:26.428 "w_mbytes_per_sec": 0 00:16:26.428 }, 00:16:26.428 "claimed": true, 00:16:26.428 "claim_type": "exclusive_write", 00:16:26.428 "zoned": false, 00:16:26.429 "supported_io_types": { 00:16:26.429 "read": true, 00:16:26.429 "write": true, 00:16:26.429 "unmap": true, 00:16:26.429 "flush": true, 00:16:26.429 "reset": true, 00:16:26.429 
"nvme_admin": false, 00:16:26.429 "nvme_io": false, 00:16:26.429 "nvme_io_md": false, 00:16:26.429 "write_zeroes": true, 00:16:26.429 "zcopy": true, 00:16:26.429 "get_zone_info": false, 00:16:26.429 "zone_management": false, 00:16:26.429 "zone_append": false, 00:16:26.429 "compare": false, 00:16:26.429 "compare_and_write": false, 00:16:26.429 "abort": true, 00:16:26.429 "seek_hole": false, 00:16:26.429 "seek_data": false, 00:16:26.429 "copy": true, 00:16:26.429 "nvme_iov_md": false 00:16:26.429 }, 00:16:26.429 "memory_domains": [ 00:16:26.429 { 00:16:26.429 "dma_device_id": "system", 00:16:26.429 "dma_device_type": 1 00:16:26.429 }, 00:16:26.429 { 00:16:26.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.429 "dma_device_type": 2 00:16:26.429 } 00:16:26.429 ], 00:16:26.429 "driver_specific": {} 00:16:26.429 } 00:16:26.429 ] 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.429 
09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.429 "name": "Existed_Raid", 00:16:26.429 "uuid": "ecb6f8f5-529e-4423-841e-e418f682f0e1", 00:16:26.429 "strip_size_kb": 64, 00:16:26.429 "state": "online", 00:16:26.429 "raid_level": "concat", 00:16:26.429 "superblock": false, 00:16:26.429 "num_base_bdevs": 4, 00:16:26.429 "num_base_bdevs_discovered": 4, 00:16:26.429 "num_base_bdevs_operational": 4, 00:16:26.429 "base_bdevs_list": [ 00:16:26.429 { 00:16:26.429 "name": "BaseBdev1", 00:16:26.429 "uuid": "477913ca-6cb9-41d7-8a54-e3b57df341e9", 00:16:26.429 "is_configured": true, 00:16:26.429 "data_offset": 0, 00:16:26.429 "data_size": 65536 00:16:26.429 }, 00:16:26.429 { 00:16:26.429 "name": "BaseBdev2", 00:16:26.429 "uuid": "79c8e026-076b-406e-b96b-2df53b094496", 00:16:26.429 "is_configured": true, 00:16:26.429 "data_offset": 0, 00:16:26.429 "data_size": 65536 00:16:26.429 }, 00:16:26.429 { 00:16:26.429 "name": "BaseBdev3", 
00:16:26.429 "uuid": "2cdc48ce-d697-40b3-b385-de8e121ba210", 00:16:26.429 "is_configured": true, 00:16:26.429 "data_offset": 0, 00:16:26.429 "data_size": 65536 00:16:26.429 }, 00:16:26.429 { 00:16:26.429 "name": "BaseBdev4", 00:16:26.429 "uuid": "1a734acd-cb8e-46a5-8be3-8b7ee10e7780", 00:16:26.429 "is_configured": true, 00:16:26.429 "data_offset": 0, 00:16:26.429 "data_size": 65536 00:16:26.429 } 00:16:26.429 ] 00:16:26.429 }' 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.429 09:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.997 [2024-11-06 09:09:03.247501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.997 
09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.997 "name": "Existed_Raid", 00:16:26.997 "aliases": [ 00:16:26.997 "ecb6f8f5-529e-4423-841e-e418f682f0e1" 00:16:26.997 ], 00:16:26.997 "product_name": "Raid Volume", 00:16:26.997 "block_size": 512, 00:16:26.997 "num_blocks": 262144, 00:16:26.997 "uuid": "ecb6f8f5-529e-4423-841e-e418f682f0e1", 00:16:26.997 "assigned_rate_limits": { 00:16:26.997 "rw_ios_per_sec": 0, 00:16:26.997 "rw_mbytes_per_sec": 0, 00:16:26.997 "r_mbytes_per_sec": 0, 00:16:26.997 "w_mbytes_per_sec": 0 00:16:26.997 }, 00:16:26.997 "claimed": false, 00:16:26.997 "zoned": false, 00:16:26.997 "supported_io_types": { 00:16:26.997 "read": true, 00:16:26.997 "write": true, 00:16:26.997 "unmap": true, 00:16:26.997 "flush": true, 00:16:26.997 "reset": true, 00:16:26.997 "nvme_admin": false, 00:16:26.997 "nvme_io": false, 00:16:26.997 "nvme_io_md": false, 00:16:26.997 "write_zeroes": true, 00:16:26.997 "zcopy": false, 00:16:26.997 "get_zone_info": false, 00:16:26.997 "zone_management": false, 00:16:26.997 "zone_append": false, 00:16:26.997 "compare": false, 00:16:26.997 "compare_and_write": false, 00:16:26.997 "abort": false, 00:16:26.997 "seek_hole": false, 00:16:26.997 "seek_data": false, 00:16:26.997 "copy": false, 00:16:26.997 "nvme_iov_md": false 00:16:26.997 }, 00:16:26.997 "memory_domains": [ 00:16:26.997 { 00:16:26.997 "dma_device_id": "system", 00:16:26.997 "dma_device_type": 1 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.997 "dma_device_type": 2 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "dma_device_id": "system", 00:16:26.997 "dma_device_type": 1 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.997 "dma_device_type": 2 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "dma_device_id": "system", 00:16:26.997 "dma_device_type": 1 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:26.997 "dma_device_type": 2 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "dma_device_id": "system", 00:16:26.997 "dma_device_type": 1 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.997 "dma_device_type": 2 00:16:26.997 } 00:16:26.997 ], 00:16:26.997 "driver_specific": { 00:16:26.997 "raid": { 00:16:26.997 "uuid": "ecb6f8f5-529e-4423-841e-e418f682f0e1", 00:16:26.997 "strip_size_kb": 64, 00:16:26.997 "state": "online", 00:16:26.997 "raid_level": "concat", 00:16:26.997 "superblock": false, 00:16:26.997 "num_base_bdevs": 4, 00:16:26.997 "num_base_bdevs_discovered": 4, 00:16:26.997 "num_base_bdevs_operational": 4, 00:16:26.997 "base_bdevs_list": [ 00:16:26.997 { 00:16:26.997 "name": "BaseBdev1", 00:16:26.997 "uuid": "477913ca-6cb9-41d7-8a54-e3b57df341e9", 00:16:26.997 "is_configured": true, 00:16:26.997 "data_offset": 0, 00:16:26.997 "data_size": 65536 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "name": "BaseBdev2", 00:16:26.997 "uuid": "79c8e026-076b-406e-b96b-2df53b094496", 00:16:26.997 "is_configured": true, 00:16:26.997 "data_offset": 0, 00:16:26.997 "data_size": 65536 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "name": "BaseBdev3", 00:16:26.997 "uuid": "2cdc48ce-d697-40b3-b385-de8e121ba210", 00:16:26.997 "is_configured": true, 00:16:26.997 "data_offset": 0, 00:16:26.997 "data_size": 65536 00:16:26.997 }, 00:16:26.997 { 00:16:26.997 "name": "BaseBdev4", 00:16:26.997 "uuid": "1a734acd-cb8e-46a5-8be3-8b7ee10e7780", 00:16:26.997 "is_configured": true, 00:16:26.997 "data_offset": 0, 00:16:26.997 "data_size": 65536 00:16:26.997 } 00:16:26.997 ] 00:16:26.997 } 00:16:26.997 } 00:16:26.997 }' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:26.997 BaseBdev2 
00:16:26.997 BaseBdev3 00:16:26.997 BaseBdev4' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.997 09:09:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.997 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.998 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.998 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.998 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:26.998 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.998 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.998 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.257 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.257 09:09:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.258 [2024-11-06 09:09:03.595330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.258 [2024-11-06 09:09:03.595703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.258 [2024-11-06 09:09:03.595854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.258 "name": "Existed_Raid", 00:16:27.258 "uuid": "ecb6f8f5-529e-4423-841e-e418f682f0e1", 00:16:27.258 "strip_size_kb": 64, 00:16:27.258 "state": "offline", 00:16:27.258 "raid_level": "concat", 00:16:27.258 "superblock": false, 00:16:27.258 "num_base_bdevs": 4, 00:16:27.258 "num_base_bdevs_discovered": 3, 00:16:27.258 "num_base_bdevs_operational": 3, 00:16:27.258 "base_bdevs_list": [ 00:16:27.258 { 00:16:27.258 "name": null, 00:16:27.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.258 "is_configured": false, 00:16:27.258 "data_offset": 0, 00:16:27.258 "data_size": 65536 00:16:27.258 }, 00:16:27.258 { 00:16:27.258 "name": "BaseBdev2", 00:16:27.258 "uuid": "79c8e026-076b-406e-b96b-2df53b094496", 00:16:27.258 "is_configured": 
true, 00:16:27.258 "data_offset": 0, 00:16:27.258 "data_size": 65536 00:16:27.258 }, 00:16:27.258 { 00:16:27.258 "name": "BaseBdev3", 00:16:27.258 "uuid": "2cdc48ce-d697-40b3-b385-de8e121ba210", 00:16:27.258 "is_configured": true, 00:16:27.258 "data_offset": 0, 00:16:27.258 "data_size": 65536 00:16:27.258 }, 00:16:27.258 { 00:16:27.258 "name": "BaseBdev4", 00:16:27.258 "uuid": "1a734acd-cb8e-46a5-8be3-8b7ee10e7780", 00:16:27.258 "is_configured": true, 00:16:27.258 "data_offset": 0, 00:16:27.258 "data_size": 65536 00:16:27.258 } 00:16:27.258 ] 00:16:27.258 }' 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.258 09:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.825 [2024-11-06 09:09:04.235914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.825 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.084 [2024-11-06 09:09:04.382331] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.084 09:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.084 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.084 [2024-11-06 09:09:04.529306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:28.084 [2024-11-06 09:09:04.529399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 BaseBdev2 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 [ 00:16:28.343 { 00:16:28.343 "name": "BaseBdev2", 00:16:28.343 "aliases": [ 00:16:28.343 "b08eb476-77b5-4ef6-895d-83e9df06ee84" 00:16:28.343 ], 00:16:28.343 "product_name": "Malloc disk", 00:16:28.343 "block_size": 512, 00:16:28.343 "num_blocks": 65536, 00:16:28.343 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:28.343 "assigned_rate_limits": { 00:16:28.343 "rw_ios_per_sec": 0, 00:16:28.343 "rw_mbytes_per_sec": 0, 00:16:28.343 "r_mbytes_per_sec": 0, 00:16:28.343 "w_mbytes_per_sec": 0 00:16:28.343 }, 00:16:28.343 "claimed": false, 00:16:28.343 "zoned": false, 00:16:28.343 "supported_io_types": { 00:16:28.343 "read": true, 00:16:28.343 "write": true, 00:16:28.343 "unmap": true, 00:16:28.343 "flush": true, 00:16:28.343 "reset": true, 00:16:28.343 "nvme_admin": false, 00:16:28.343 "nvme_io": false, 00:16:28.343 "nvme_io_md": false, 00:16:28.343 "write_zeroes": true, 00:16:28.343 "zcopy": true, 00:16:28.343 "get_zone_info": false, 00:16:28.343 "zone_management": false, 00:16:28.343 "zone_append": false, 00:16:28.343 "compare": false, 00:16:28.343 "compare_and_write": false, 00:16:28.343 "abort": true, 00:16:28.343 "seek_hole": false, 00:16:28.343 
"seek_data": false, 00:16:28.343 "copy": true, 00:16:28.343 "nvme_iov_md": false 00:16:28.343 }, 00:16:28.343 "memory_domains": [ 00:16:28.343 { 00:16:28.343 "dma_device_id": "system", 00:16:28.343 "dma_device_type": 1 00:16:28.343 }, 00:16:28.343 { 00:16:28.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.343 "dma_device_type": 2 00:16:28.343 } 00:16:28.343 ], 00:16:28.343 "driver_specific": {} 00:16:28.343 } 00:16:28.343 ] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 BaseBdev3 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.343 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.343 [ 00:16:28.343 { 00:16:28.343 "name": "BaseBdev3", 00:16:28.343 "aliases": [ 00:16:28.343 "019890db-fa95-494b-803a-29ce2d0f6110" 00:16:28.343 ], 00:16:28.343 "product_name": "Malloc disk", 00:16:28.343 "block_size": 512, 00:16:28.343 "num_blocks": 65536, 00:16:28.343 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:28.343 "assigned_rate_limits": { 00:16:28.343 "rw_ios_per_sec": 0, 00:16:28.343 "rw_mbytes_per_sec": 0, 00:16:28.343 "r_mbytes_per_sec": 0, 00:16:28.343 "w_mbytes_per_sec": 0 00:16:28.343 }, 00:16:28.343 "claimed": false, 00:16:28.343 "zoned": false, 00:16:28.343 "supported_io_types": { 00:16:28.343 "read": true, 00:16:28.343 "write": true, 00:16:28.343 "unmap": true, 00:16:28.343 "flush": true, 00:16:28.343 "reset": true, 00:16:28.343 "nvme_admin": false, 00:16:28.343 "nvme_io": false, 00:16:28.343 "nvme_io_md": false, 00:16:28.343 "write_zeroes": true, 00:16:28.343 "zcopy": true, 00:16:28.343 "get_zone_info": false, 00:16:28.343 "zone_management": false, 00:16:28.343 "zone_append": false, 00:16:28.343 "compare": false, 00:16:28.343 "compare_and_write": false, 00:16:28.343 "abort": true, 00:16:28.343 "seek_hole": false, 00:16:28.344 "seek_data": false, 
00:16:28.344 "copy": true, 00:16:28.344 "nvme_iov_md": false 00:16:28.344 }, 00:16:28.344 "memory_domains": [ 00:16:28.344 { 00:16:28.344 "dma_device_id": "system", 00:16:28.344 "dma_device_type": 1 00:16:28.344 }, 00:16:28.344 { 00:16:28.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.344 "dma_device_type": 2 00:16:28.344 } 00:16:28.344 ], 00:16:28.344 "driver_specific": {} 00:16:28.344 } 00:16:28.344 ] 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.344 BaseBdev4 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:28.344 
09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.344 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.602 [ 00:16:28.602 { 00:16:28.602 "name": "BaseBdev4", 00:16:28.602 "aliases": [ 00:16:28.603 "661cc064-b5ba-48ec-8624-0bcfbe0eeffa" 00:16:28.603 ], 00:16:28.603 "product_name": "Malloc disk", 00:16:28.603 "block_size": 512, 00:16:28.603 "num_blocks": 65536, 00:16:28.603 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:28.603 "assigned_rate_limits": { 00:16:28.603 "rw_ios_per_sec": 0, 00:16:28.603 "rw_mbytes_per_sec": 0, 00:16:28.603 "r_mbytes_per_sec": 0, 00:16:28.603 "w_mbytes_per_sec": 0 00:16:28.603 }, 00:16:28.603 "claimed": false, 00:16:28.603 "zoned": false, 00:16:28.603 "supported_io_types": { 00:16:28.603 "read": true, 00:16:28.603 "write": true, 00:16:28.603 "unmap": true, 00:16:28.603 "flush": true, 00:16:28.603 "reset": true, 00:16:28.603 "nvme_admin": false, 00:16:28.603 "nvme_io": false, 00:16:28.603 "nvme_io_md": false, 00:16:28.603 "write_zeroes": true, 00:16:28.603 "zcopy": true, 00:16:28.603 "get_zone_info": false, 00:16:28.603 "zone_management": false, 00:16:28.603 "zone_append": false, 00:16:28.603 "compare": false, 00:16:28.603 "compare_and_write": false, 00:16:28.603 "abort": true, 00:16:28.603 "seek_hole": false, 00:16:28.603 "seek_data": false, 00:16:28.603 
"copy": true, 00:16:28.603 "nvme_iov_md": false 00:16:28.603 }, 00:16:28.603 "memory_domains": [ 00:16:28.603 { 00:16:28.603 "dma_device_id": "system", 00:16:28.603 "dma_device_type": 1 00:16:28.603 }, 00:16:28.603 { 00:16:28.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.603 "dma_device_type": 2 00:16:28.603 } 00:16:28.603 ], 00:16:28.603 "driver_specific": {} 00:16:28.603 } 00:16:28.603 ] 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 [2024-11-06 09:09:04.902365] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.603 [2024-11-06 09:09:04.902570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.603 [2024-11-06 09:09:04.902752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:28.603 [2024-11-06 09:09:04.905340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:28.603 [2024-11-06 09:09:04.905532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.603 09:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.603 "name": "Existed_Raid", 00:16:28.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.603 "strip_size_kb": 64, 00:16:28.603 "state": "configuring", 00:16:28.603 
"raid_level": "concat", 00:16:28.603 "superblock": false, 00:16:28.603 "num_base_bdevs": 4, 00:16:28.603 "num_base_bdevs_discovered": 3, 00:16:28.603 "num_base_bdevs_operational": 4, 00:16:28.603 "base_bdevs_list": [ 00:16:28.603 { 00:16:28.603 "name": "BaseBdev1", 00:16:28.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.603 "is_configured": false, 00:16:28.603 "data_offset": 0, 00:16:28.603 "data_size": 0 00:16:28.603 }, 00:16:28.603 { 00:16:28.603 "name": "BaseBdev2", 00:16:28.603 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:28.603 "is_configured": true, 00:16:28.603 "data_offset": 0, 00:16:28.603 "data_size": 65536 00:16:28.603 }, 00:16:28.603 { 00:16:28.603 "name": "BaseBdev3", 00:16:28.603 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:28.603 "is_configured": true, 00:16:28.603 "data_offset": 0, 00:16:28.603 "data_size": 65536 00:16:28.603 }, 00:16:28.603 { 00:16:28.603 "name": "BaseBdev4", 00:16:28.603 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:28.603 "is_configured": true, 00:16:28.603 "data_offset": 0, 00:16:28.603 "data_size": 65536 00:16:28.603 } 00:16:28.603 ] 00:16:28.603 }' 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.603 09:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.170 [2024-11-06 09:09:05.470578] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.170 "name": "Existed_Raid", 00:16:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.170 "strip_size_kb": 64, 00:16:29.170 "state": "configuring", 00:16:29.170 "raid_level": "concat", 00:16:29.170 "superblock": false, 
00:16:29.170 "num_base_bdevs": 4, 00:16:29.170 "num_base_bdevs_discovered": 2, 00:16:29.170 "num_base_bdevs_operational": 4, 00:16:29.170 "base_bdevs_list": [ 00:16:29.170 { 00:16:29.170 "name": "BaseBdev1", 00:16:29.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.170 "is_configured": false, 00:16:29.170 "data_offset": 0, 00:16:29.170 "data_size": 0 00:16:29.170 }, 00:16:29.170 { 00:16:29.170 "name": null, 00:16:29.170 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:29.170 "is_configured": false, 00:16:29.170 "data_offset": 0, 00:16:29.170 "data_size": 65536 00:16:29.170 }, 00:16:29.170 { 00:16:29.170 "name": "BaseBdev3", 00:16:29.170 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:29.170 "is_configured": true, 00:16:29.170 "data_offset": 0, 00:16:29.170 "data_size": 65536 00:16:29.170 }, 00:16:29.170 { 00:16:29.170 "name": "BaseBdev4", 00:16:29.170 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:29.170 "is_configured": true, 00:16:29.170 "data_offset": 0, 00:16:29.170 "data_size": 65536 00:16:29.170 } 00:16:29.170 ] 00:16:29.170 }' 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.170 09:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:29.738 09:09:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.738 [2024-11-06 09:09:06.121520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.738 BaseBdev1 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.738 [ 00:16:29.738 { 00:16:29.738 "name": "BaseBdev1", 00:16:29.738 "aliases": [ 00:16:29.738 "28274e72-263a-447a-b689-d834e5852256" 00:16:29.738 ], 00:16:29.738 "product_name": "Malloc disk", 00:16:29.738 "block_size": 512, 00:16:29.738 "num_blocks": 65536, 00:16:29.738 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:29.738 "assigned_rate_limits": { 00:16:29.738 "rw_ios_per_sec": 0, 00:16:29.738 "rw_mbytes_per_sec": 0, 00:16:29.738 "r_mbytes_per_sec": 0, 00:16:29.738 "w_mbytes_per_sec": 0 00:16:29.738 }, 00:16:29.738 "claimed": true, 00:16:29.738 "claim_type": "exclusive_write", 00:16:29.738 "zoned": false, 00:16:29.738 "supported_io_types": { 00:16:29.738 "read": true, 00:16:29.738 "write": true, 00:16:29.738 "unmap": true, 00:16:29.738 "flush": true, 00:16:29.738 "reset": true, 00:16:29.738 "nvme_admin": false, 00:16:29.738 "nvme_io": false, 00:16:29.738 "nvme_io_md": false, 00:16:29.738 "write_zeroes": true, 00:16:29.738 "zcopy": true, 00:16:29.738 "get_zone_info": false, 00:16:29.738 "zone_management": false, 00:16:29.738 "zone_append": false, 00:16:29.738 "compare": false, 00:16:29.738 "compare_and_write": false, 00:16:29.738 "abort": true, 00:16:29.738 "seek_hole": false, 00:16:29.738 "seek_data": false, 00:16:29.738 "copy": true, 00:16:29.738 "nvme_iov_md": false 00:16:29.738 }, 00:16:29.738 "memory_domains": [ 00:16:29.738 { 00:16:29.738 "dma_device_id": "system", 00:16:29.738 "dma_device_type": 1 00:16:29.738 }, 00:16:29.738 { 00:16:29.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.738 "dma_device_type": 2 00:16:29.738 } 00:16:29.738 ], 00:16:29.738 "driver_specific": {} 00:16:29.738 } 00:16:29.738 ] 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.738 "name": "Existed_Raid", 00:16:29.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.738 "strip_size_kb": 64, 00:16:29.738 "state": "configuring", 00:16:29.738 "raid_level": "concat", 00:16:29.738 "superblock": false, 
00:16:29.738 "num_base_bdevs": 4, 00:16:29.738 "num_base_bdevs_discovered": 3, 00:16:29.738 "num_base_bdevs_operational": 4, 00:16:29.738 "base_bdevs_list": [ 00:16:29.738 { 00:16:29.738 "name": "BaseBdev1", 00:16:29.738 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:29.738 "is_configured": true, 00:16:29.738 "data_offset": 0, 00:16:29.738 "data_size": 65536 00:16:29.738 }, 00:16:29.738 { 00:16:29.738 "name": null, 00:16:29.738 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:29.738 "is_configured": false, 00:16:29.738 "data_offset": 0, 00:16:29.738 "data_size": 65536 00:16:29.738 }, 00:16:29.738 { 00:16:29.738 "name": "BaseBdev3", 00:16:29.738 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:29.738 "is_configured": true, 00:16:29.738 "data_offset": 0, 00:16:29.738 "data_size": 65536 00:16:29.738 }, 00:16:29.738 { 00:16:29.738 "name": "BaseBdev4", 00:16:29.738 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:29.738 "is_configured": true, 00:16:29.738 "data_offset": 0, 00:16:29.738 "data_size": 65536 00:16:29.738 } 00:16:29.738 ] 00:16:29.738 }' 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.738 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:30.304 09:09:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.304 [2024-11-06 09:09:06.749786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.304 09:09:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.304 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.304 "name": "Existed_Raid", 00:16:30.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.304 "strip_size_kb": 64, 00:16:30.304 "state": "configuring", 00:16:30.304 "raid_level": "concat", 00:16:30.305 "superblock": false, 00:16:30.305 "num_base_bdevs": 4, 00:16:30.305 "num_base_bdevs_discovered": 2, 00:16:30.305 "num_base_bdevs_operational": 4, 00:16:30.305 "base_bdevs_list": [ 00:16:30.305 { 00:16:30.305 "name": "BaseBdev1", 00:16:30.305 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:30.305 "is_configured": true, 00:16:30.305 "data_offset": 0, 00:16:30.305 "data_size": 65536 00:16:30.305 }, 00:16:30.305 { 00:16:30.305 "name": null, 00:16:30.305 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:30.305 "is_configured": false, 00:16:30.305 "data_offset": 0, 00:16:30.305 "data_size": 65536 00:16:30.305 }, 00:16:30.305 { 00:16:30.305 "name": null, 00:16:30.305 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:30.305 "is_configured": false, 00:16:30.305 "data_offset": 0, 00:16:30.305 "data_size": 65536 00:16:30.305 }, 00:16:30.305 { 00:16:30.305 "name": "BaseBdev4", 00:16:30.305 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:30.305 "is_configured": true, 00:16:30.305 "data_offset": 0, 00:16:30.305 "data_size": 65536 00:16:30.305 } 00:16:30.305 ] 00:16:30.305 }' 00:16:30.305 09:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.305 09:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.871 [2024-11-06 09:09:07.377991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.871 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.130 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.130 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.130 "name": "Existed_Raid", 00:16:31.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.130 "strip_size_kb": 64, 00:16:31.130 "state": "configuring", 00:16:31.130 "raid_level": "concat", 00:16:31.130 "superblock": false, 00:16:31.130 "num_base_bdevs": 4, 00:16:31.130 "num_base_bdevs_discovered": 3, 00:16:31.130 "num_base_bdevs_operational": 4, 00:16:31.130 "base_bdevs_list": [ 00:16:31.130 { 00:16:31.130 "name": "BaseBdev1", 00:16:31.130 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:31.130 "is_configured": true, 00:16:31.130 "data_offset": 0, 00:16:31.130 "data_size": 65536 00:16:31.130 }, 00:16:31.130 { 00:16:31.130 "name": null, 00:16:31.130 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:31.130 "is_configured": false, 00:16:31.130 "data_offset": 0, 00:16:31.130 "data_size": 65536 00:16:31.130 }, 00:16:31.130 { 00:16:31.130 "name": "BaseBdev3", 00:16:31.130 "uuid": 
"019890db-fa95-494b-803a-29ce2d0f6110", 00:16:31.130 "is_configured": true, 00:16:31.130 "data_offset": 0, 00:16:31.130 "data_size": 65536 00:16:31.130 }, 00:16:31.130 { 00:16:31.130 "name": "BaseBdev4", 00:16:31.130 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:31.130 "is_configured": true, 00:16:31.130 "data_offset": 0, 00:16:31.130 "data_size": 65536 00:16:31.130 } 00:16:31.130 ] 00:16:31.130 }' 00:16:31.130 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.130 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.697 09:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.697 [2024-11-06 09:09:08.002202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.697 "name": "Existed_Raid", 00:16:31.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.697 "strip_size_kb": 64, 00:16:31.697 "state": "configuring", 00:16:31.697 "raid_level": "concat", 00:16:31.697 "superblock": false, 00:16:31.697 "num_base_bdevs": 4, 00:16:31.697 
"num_base_bdevs_discovered": 2, 00:16:31.697 "num_base_bdevs_operational": 4, 00:16:31.697 "base_bdevs_list": [ 00:16:31.697 { 00:16:31.697 "name": null, 00:16:31.697 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:31.697 "is_configured": false, 00:16:31.697 "data_offset": 0, 00:16:31.697 "data_size": 65536 00:16:31.697 }, 00:16:31.697 { 00:16:31.697 "name": null, 00:16:31.697 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:31.697 "is_configured": false, 00:16:31.697 "data_offset": 0, 00:16:31.697 "data_size": 65536 00:16:31.697 }, 00:16:31.697 { 00:16:31.697 "name": "BaseBdev3", 00:16:31.697 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:31.697 "is_configured": true, 00:16:31.697 "data_offset": 0, 00:16:31.697 "data_size": 65536 00:16:31.697 }, 00:16:31.697 { 00:16:31.697 "name": "BaseBdev4", 00:16:31.697 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:31.697 "is_configured": true, 00:16:31.697 "data_offset": 0, 00:16:31.697 "data_size": 65536 00:16:31.697 } 00:16:31.697 ] 00:16:31.697 }' 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.697 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.263 [2024-11-06 09:09:08.609215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.263 "name": "Existed_Raid", 00:16:32.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.263 "strip_size_kb": 64, 00:16:32.263 "state": "configuring", 00:16:32.263 "raid_level": "concat", 00:16:32.263 "superblock": false, 00:16:32.263 "num_base_bdevs": 4, 00:16:32.263 "num_base_bdevs_discovered": 3, 00:16:32.263 "num_base_bdevs_operational": 4, 00:16:32.263 "base_bdevs_list": [ 00:16:32.263 { 00:16:32.263 "name": null, 00:16:32.263 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:32.263 "is_configured": false, 00:16:32.263 "data_offset": 0, 00:16:32.263 "data_size": 65536 00:16:32.263 }, 00:16:32.263 { 00:16:32.263 "name": "BaseBdev2", 00:16:32.263 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:32.263 "is_configured": true, 00:16:32.263 "data_offset": 0, 00:16:32.263 "data_size": 65536 00:16:32.263 }, 00:16:32.263 { 00:16:32.263 "name": "BaseBdev3", 00:16:32.263 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:32.263 "is_configured": true, 00:16:32.263 "data_offset": 0, 00:16:32.263 "data_size": 65536 00:16:32.263 }, 00:16:32.263 { 00:16:32.263 "name": "BaseBdev4", 00:16:32.263 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:32.263 "is_configured": true, 00:16:32.263 "data_offset": 0, 00:16:32.263 "data_size": 65536 00:16:32.263 } 00:16:32.263 ] 00:16:32.263 }' 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.263 09:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 28274e72-263a-447a-b689-d834e5852256 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.830 [2024-11-06 09:09:09.271084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:32.830 [2024-11-06 09:09:09.271161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:32.830 [2024-11-06 09:09:09.271174] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:16:32.830 [2024-11-06 09:09:09.271507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:16:32.830 [2024-11-06 09:09:09.271693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.830 [2024-11-06 09:09:09.271715] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:32.830 [2024-11-06 09:09:09.272044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.830 NewBaseBdev 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.830 09:09:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.830 [ 00:16:32.830 { 00:16:32.830 "name": "NewBaseBdev", 00:16:32.830 "aliases": [ 00:16:32.830 "28274e72-263a-447a-b689-d834e5852256" 00:16:32.830 ], 00:16:32.830 "product_name": "Malloc disk", 00:16:32.830 "block_size": 512, 00:16:32.830 "num_blocks": 65536, 00:16:32.830 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:32.830 "assigned_rate_limits": { 00:16:32.830 "rw_ios_per_sec": 0, 00:16:32.830 "rw_mbytes_per_sec": 0, 00:16:32.830 "r_mbytes_per_sec": 0, 00:16:32.830 "w_mbytes_per_sec": 0 00:16:32.830 }, 00:16:32.830 "claimed": true, 00:16:32.830 "claim_type": "exclusive_write", 00:16:32.830 "zoned": false, 00:16:32.830 "supported_io_types": { 00:16:32.830 "read": true, 00:16:32.830 "write": true, 00:16:32.830 "unmap": true, 00:16:32.830 "flush": true, 00:16:32.830 "reset": true, 00:16:32.830 "nvme_admin": false, 00:16:32.830 "nvme_io": false, 00:16:32.830 "nvme_io_md": false, 00:16:32.830 "write_zeroes": true, 00:16:32.830 "zcopy": true, 00:16:32.830 "get_zone_info": false, 00:16:32.830 "zone_management": false, 00:16:32.830 "zone_append": false, 00:16:32.830 "compare": false, 00:16:32.830 "compare_and_write": false, 00:16:32.830 "abort": true, 00:16:32.830 "seek_hole": false, 00:16:32.830 "seek_data": false, 00:16:32.830 "copy": true, 00:16:32.830 "nvme_iov_md": false 00:16:32.830 }, 00:16:32.830 "memory_domains": [ 00:16:32.830 { 00:16:32.830 "dma_device_id": "system", 00:16:32.830 "dma_device_type": 1 00:16:32.830 }, 00:16:32.830 { 00:16:32.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.830 "dma_device_type": 2 00:16:32.830 } 00:16:32.830 ], 00:16:32.831 "driver_specific": {} 00:16:32.831 } 00:16:32.831 ] 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.831 "name": "Existed_Raid", 00:16:32.831 "uuid": "a369d0a4-25d9-4499-a4d9-1a6634788f44", 00:16:32.831 "strip_size_kb": 64, 00:16:32.831 "state": "online", 00:16:32.831 "raid_level": "concat", 00:16:32.831 "superblock": false, 00:16:32.831 
"num_base_bdevs": 4, 00:16:32.831 "num_base_bdevs_discovered": 4, 00:16:32.831 "num_base_bdevs_operational": 4, 00:16:32.831 "base_bdevs_list": [ 00:16:32.831 { 00:16:32.831 "name": "NewBaseBdev", 00:16:32.831 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:32.831 "is_configured": true, 00:16:32.831 "data_offset": 0, 00:16:32.831 "data_size": 65536 00:16:32.831 }, 00:16:32.831 { 00:16:32.831 "name": "BaseBdev2", 00:16:32.831 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:32.831 "is_configured": true, 00:16:32.831 "data_offset": 0, 00:16:32.831 "data_size": 65536 00:16:32.831 }, 00:16:32.831 { 00:16:32.831 "name": "BaseBdev3", 00:16:32.831 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:32.831 "is_configured": true, 00:16:32.831 "data_offset": 0, 00:16:32.831 "data_size": 65536 00:16:32.831 }, 00:16:32.831 { 00:16:32.831 "name": "BaseBdev4", 00:16:32.831 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:32.831 "is_configured": true, 00:16:32.831 "data_offset": 0, 00:16:32.831 "data_size": 65536 00:16:32.831 } 00:16:32.831 ] 00:16:32.831 }' 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.831 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.395 09:09:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.395 [2024-11-06 09:09:09.819741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.395 "name": "Existed_Raid", 00:16:33.395 "aliases": [ 00:16:33.395 "a369d0a4-25d9-4499-a4d9-1a6634788f44" 00:16:33.395 ], 00:16:33.395 "product_name": "Raid Volume", 00:16:33.395 "block_size": 512, 00:16:33.395 "num_blocks": 262144, 00:16:33.395 "uuid": "a369d0a4-25d9-4499-a4d9-1a6634788f44", 00:16:33.395 "assigned_rate_limits": { 00:16:33.395 "rw_ios_per_sec": 0, 00:16:33.395 "rw_mbytes_per_sec": 0, 00:16:33.395 "r_mbytes_per_sec": 0, 00:16:33.395 "w_mbytes_per_sec": 0 00:16:33.395 }, 00:16:33.395 "claimed": false, 00:16:33.395 "zoned": false, 00:16:33.395 "supported_io_types": { 00:16:33.395 "read": true, 00:16:33.395 "write": true, 00:16:33.395 "unmap": true, 00:16:33.395 "flush": true, 00:16:33.395 "reset": true, 00:16:33.395 "nvme_admin": false, 00:16:33.395 "nvme_io": false, 00:16:33.395 "nvme_io_md": false, 00:16:33.395 "write_zeroes": true, 00:16:33.395 "zcopy": false, 00:16:33.395 "get_zone_info": false, 00:16:33.395 "zone_management": false, 00:16:33.395 "zone_append": false, 00:16:33.395 "compare": false, 00:16:33.395 "compare_and_write": false, 00:16:33.395 "abort": false, 00:16:33.395 "seek_hole": false, 00:16:33.395 "seek_data": false, 00:16:33.395 "copy": false, 00:16:33.395 "nvme_iov_md": false 00:16:33.395 }, 
00:16:33.395 "memory_domains": [ 00:16:33.395 { 00:16:33.395 "dma_device_id": "system", 00:16:33.395 "dma_device_type": 1 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.395 "dma_device_type": 2 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "dma_device_id": "system", 00:16:33.395 "dma_device_type": 1 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.395 "dma_device_type": 2 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "dma_device_id": "system", 00:16:33.395 "dma_device_type": 1 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.395 "dma_device_type": 2 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "dma_device_id": "system", 00:16:33.395 "dma_device_type": 1 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.395 "dma_device_type": 2 00:16:33.395 } 00:16:33.395 ], 00:16:33.395 "driver_specific": { 00:16:33.395 "raid": { 00:16:33.395 "uuid": "a369d0a4-25d9-4499-a4d9-1a6634788f44", 00:16:33.395 "strip_size_kb": 64, 00:16:33.395 "state": "online", 00:16:33.395 "raid_level": "concat", 00:16:33.395 "superblock": false, 00:16:33.395 "num_base_bdevs": 4, 00:16:33.395 "num_base_bdevs_discovered": 4, 00:16:33.395 "num_base_bdevs_operational": 4, 00:16:33.395 "base_bdevs_list": [ 00:16:33.395 { 00:16:33.395 "name": "NewBaseBdev", 00:16:33.395 "uuid": "28274e72-263a-447a-b689-d834e5852256", 00:16:33.395 "is_configured": true, 00:16:33.395 "data_offset": 0, 00:16:33.395 "data_size": 65536 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "name": "BaseBdev2", 00:16:33.395 "uuid": "b08eb476-77b5-4ef6-895d-83e9df06ee84", 00:16:33.395 "is_configured": true, 00:16:33.395 "data_offset": 0, 00:16:33.395 "data_size": 65536 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "name": "BaseBdev3", 00:16:33.395 "uuid": "019890db-fa95-494b-803a-29ce2d0f6110", 00:16:33.395 "is_configured": true, 00:16:33.395 "data_offset": 0, 
00:16:33.395 "data_size": 65536 00:16:33.395 }, 00:16:33.395 { 00:16:33.395 "name": "BaseBdev4", 00:16:33.395 "uuid": "661cc064-b5ba-48ec-8624-0bcfbe0eeffa", 00:16:33.395 "is_configured": true, 00:16:33.395 "data_offset": 0, 00:16:33.395 "data_size": 65536 00:16:33.395 } 00:16:33.395 ] 00:16:33.395 } 00:16:33.395 } 00:16:33.395 }' 00:16:33.395 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.396 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:33.396 BaseBdev2 00:16:33.396 BaseBdev3 00:16:33.396 BaseBdev4' 00:16:33.396 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.653 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.653 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.653 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:33.653 09:09:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.653 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 09:09:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.653 [2024-11-06 09:09:10.159413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.653 [2024-11-06 09:09:10.159451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.653 [2024-11-06 09:09:10.159553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.653 [2024-11-06 09:09:10.159649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.653 [2024-11-06 09:09:10.159667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71327 00:16:33.653 09:09:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 71327 ']' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 71327 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:33.653 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71327 00:16:33.911 killing process with pid 71327 00:16:33.911 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:33.911 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:33.911 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71327' 00:16:33.911 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 71327 00:16:33.911 [2024-11-06 09:09:10.198401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.911 09:09:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 71327 00:16:34.169 [2024-11-06 09:09:10.559847] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.117 09:09:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:35.117 00:16:35.117 real 0m12.775s 00:16:35.117 user 0m21.133s 00:16:35.117 sys 0m1.801s 00:16:35.118 09:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:35.118 09:09:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.118 ************************************ 00:16:35.118 END TEST raid_state_function_test 00:16:35.118 ************************************ 00:16:35.118 09:09:11 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:16:35.118 09:09:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:35.118 09:09:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:35.118 09:09:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.378 ************************************ 00:16:35.378 START TEST raid_state_function_test_sb 00:16:35.378 ************************************ 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:35.378 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:35.379 Process raid pid: 72006 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=72006 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72006' 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72006 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 72006 ']' 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.379 09:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.379 [2024-11-06 09:09:11.793238] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
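The xtrace above shows how the test assembles its `bdev_raid_create` arguments before launching `bdev_svc`: any level other than raid1 is striped and gets `-z 64`, and `superblock=true` adds `-s`. A minimal stand-alone sketch of that branch logic (variable names copied from the trace; this is an illustration, not the actual `bdev_raid.sh`):

```shell
# Sketch of the argument assembly traced above; the real logic lives in
# SPDK's bdev_raid.sh test helpers.
raid_level=concat
superblock=true

# Every RAID level except raid1 is striped, so it needs a strip size.
if [ "$raid_level" != raid1 ]; then
	strip_size=64
	strip_size_create_arg="-z $strip_size"
else
	strip_size_create_arg=""
fi

# Superblock tests pass -s so each base bdev carries on-disk RAID metadata.
if [ "$superblock" = true ]; then
	superblock_create_arg=-s
else
	superblock_create_arg=""
fi

echo "bdev_raid_create args: $strip_size_create_arg $superblock_create_arg"
```

With `concat 4 true` as in this run, the create call therefore carries both `-z 64` and `-s`, which matches the `rpc_cmd bdev_raid_create -z 64 -s -r concat ...` lines later in the trace.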
00:16:35.379 [2024-11-06 09:09:11.793427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.638 [2024-11-06 09:09:11.985973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.638 [2024-11-06 09:09:12.141848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.896 [2024-11-06 09:09:12.361747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.896 [2024-11-06 09:09:12.361970] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.462 [2024-11-06 09:09:12.782686] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.462 [2024-11-06 09:09:12.782761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.462 [2024-11-06 09:09:12.782781] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.462 [2024-11-06 09:09:12.782799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.462 [2024-11-06 09:09:12.782809] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:16:36.462 [2024-11-06 09:09:12.782838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.462 [2024-11-06 09:09:12.782849] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:36.462 [2024-11-06 09:09:12.782874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.462 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.463 
09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.463 "name": "Existed_Raid", 00:16:36.463 "uuid": "84d826b8-6d9d-4776-b545-f19713314f75", 00:16:36.463 "strip_size_kb": 64, 00:16:36.463 "state": "configuring", 00:16:36.463 "raid_level": "concat", 00:16:36.463 "superblock": true, 00:16:36.463 "num_base_bdevs": 4, 00:16:36.463 "num_base_bdevs_discovered": 0, 00:16:36.463 "num_base_bdevs_operational": 4, 00:16:36.463 "base_bdevs_list": [ 00:16:36.463 { 00:16:36.463 "name": "BaseBdev1", 00:16:36.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.463 "is_configured": false, 00:16:36.463 "data_offset": 0, 00:16:36.463 "data_size": 0 00:16:36.463 }, 00:16:36.463 { 00:16:36.463 "name": "BaseBdev2", 00:16:36.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.463 "is_configured": false, 00:16:36.463 "data_offset": 0, 00:16:36.463 "data_size": 0 00:16:36.463 }, 00:16:36.463 { 00:16:36.463 "name": "BaseBdev3", 00:16:36.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.463 "is_configured": false, 00:16:36.463 "data_offset": 0, 00:16:36.463 "data_size": 0 00:16:36.463 }, 00:16:36.463 { 00:16:36.463 "name": "BaseBdev4", 00:16:36.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.463 "is_configured": false, 00:16:36.463 "data_offset": 0, 00:16:36.463 "data_size": 0 00:16:36.463 } 00:16:36.463 ] 00:16:36.463 }' 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.463 09:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.028 09:09:13 
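The `verify_raid_bdev_state` helper seen above pulls the array's JSON with `rpc_cmd bdev_raid_get_bdevs all` filtered through `jq -r '.[] | select(.name == "Existed_Raid")'`, then compares fields against the expected values. A self-contained sketch of that comparison, with an abridged copy of the JSON from the trace inlined so no SPDK target (and no jq) is needed — the real helper reads this over the RPC socket:

```shell
# JSON copied (abridged) from the trace above; normally this comes from
# rpc_cmd bdev_raid_get_bdevs all, filtered through jq by name.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}'

expected_state=configuring
expected_level=concat

# Before any base bdev exists, the array must sit in "configuring".
state_ok=$(printf '%s\n' "$raid_bdev_info" | grep -c "\"state\": \"$expected_state\"")
level_ok=$(printf '%s\n' "$raid_bdev_info" | grep -c "\"raid_level\": \"$expected_level\"")
echo "state_ok=$state_ok level_ok=$level_ok"
```

Each counter is 1 exactly when the field matches, mirroring the pass/fail decision the helper makes after every create/delete step in this log.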
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.028 [2024-11-06 09:09:13.262790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.028 [2024-11-06 09:09:13.262858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.028 [2024-11-06 09:09:13.270804] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.028 [2024-11-06 09:09:13.270878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.028 [2024-11-06 09:09:13.270895] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.028 [2024-11-06 09:09:13.270911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.028 [2024-11-06 09:09:13.270921] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.028 [2024-11-06 09:09:13.270935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.028 [2024-11-06 09:09:13.270945] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:16:37.028 [2024-11-06 09:09:13.270959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.028 [2024-11-06 09:09:13.319794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.028 BaseBdev1 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.028 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.028 [ 00:16:37.028 { 00:16:37.028 "name": "BaseBdev1", 00:16:37.028 "aliases": [ 00:16:37.028 "751527ba-73ff-48f1-960a-9eb6e117477f" 00:16:37.028 ], 00:16:37.028 "product_name": "Malloc disk", 00:16:37.028 "block_size": 512, 00:16:37.028 "num_blocks": 65536, 00:16:37.028 "uuid": "751527ba-73ff-48f1-960a-9eb6e117477f", 00:16:37.028 "assigned_rate_limits": { 00:16:37.028 "rw_ios_per_sec": 0, 00:16:37.028 "rw_mbytes_per_sec": 0, 00:16:37.028 "r_mbytes_per_sec": 0, 00:16:37.028 "w_mbytes_per_sec": 0 00:16:37.028 }, 00:16:37.028 "claimed": true, 00:16:37.028 "claim_type": "exclusive_write", 00:16:37.028 "zoned": false, 00:16:37.028 "supported_io_types": { 00:16:37.028 "read": true, 00:16:37.028 "write": true, 00:16:37.028 "unmap": true, 00:16:37.028 "flush": true, 00:16:37.028 "reset": true, 00:16:37.028 "nvme_admin": false, 00:16:37.028 "nvme_io": false, 00:16:37.028 "nvme_io_md": false, 00:16:37.028 "write_zeroes": true, 00:16:37.028 "zcopy": true, 00:16:37.028 "get_zone_info": false, 00:16:37.028 "zone_management": false, 00:16:37.028 "zone_append": false, 00:16:37.028 "compare": false, 00:16:37.028 "compare_and_write": false, 00:16:37.028 "abort": true, 00:16:37.028 "seek_hole": false, 00:16:37.028 "seek_data": false, 00:16:37.028 "copy": true, 00:16:37.028 "nvme_iov_md": false 00:16:37.028 }, 00:16:37.028 "memory_domains": [ 00:16:37.028 { 00:16:37.028 "dma_device_id": "system", 00:16:37.028 "dma_device_type": 1 00:16:37.028 }, 00:16:37.028 { 00:16:37.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.028 "dma_device_type": 2 00:16:37.028 } 
00:16:37.028 ], 00:16:37.028 "driver_specific": {} 00:16:37.028 } 00:16:37.028 ] 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.029 09:09:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.029 "name": "Existed_Raid", 00:16:37.029 "uuid": "d8145396-4724-4cf8-896e-fdb0a70741d6", 00:16:37.029 "strip_size_kb": 64, 00:16:37.029 "state": "configuring", 00:16:37.029 "raid_level": "concat", 00:16:37.029 "superblock": true, 00:16:37.029 "num_base_bdevs": 4, 00:16:37.029 "num_base_bdevs_discovered": 1, 00:16:37.029 "num_base_bdevs_operational": 4, 00:16:37.029 "base_bdevs_list": [ 00:16:37.029 { 00:16:37.029 "name": "BaseBdev1", 00:16:37.029 "uuid": "751527ba-73ff-48f1-960a-9eb6e117477f", 00:16:37.029 "is_configured": true, 00:16:37.029 "data_offset": 2048, 00:16:37.029 "data_size": 63488 00:16:37.029 }, 00:16:37.029 { 00:16:37.029 "name": "BaseBdev2", 00:16:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.029 "is_configured": false, 00:16:37.029 "data_offset": 0, 00:16:37.029 "data_size": 0 00:16:37.029 }, 00:16:37.029 { 00:16:37.029 "name": "BaseBdev3", 00:16:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.029 "is_configured": false, 00:16:37.029 "data_offset": 0, 00:16:37.029 "data_size": 0 00:16:37.029 }, 00:16:37.029 { 00:16:37.029 "name": "BaseBdev4", 00:16:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.029 "is_configured": false, 00:16:37.029 "data_offset": 0, 00:16:37.029 "data_size": 0 00:16:37.029 } 00:16:37.029 ] 00:16:37.029 }' 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.029 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.595 09:09:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.595 [2024-11-06 09:09:13.884072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.595 [2024-11-06 09:09:13.884157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.595 [2024-11-06 09:09:13.896186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.595 [2024-11-06 09:09:13.898791] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.595 [2024-11-06 09:09:13.898876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.595 [2024-11-06 09:09:13.898894] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.595 [2024-11-06 09:09:13.898912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.595 [2024-11-06 09:09:13.898924] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:37.595 [2024-11-06 09:09:13.898938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.595 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:37.596 "name": "Existed_Raid", 00:16:37.596 "uuid": "9e18f0ba-5329-4e70-b680-7a85882114d0", 00:16:37.596 "strip_size_kb": 64, 00:16:37.596 "state": "configuring", 00:16:37.596 "raid_level": "concat", 00:16:37.596 "superblock": true, 00:16:37.596 "num_base_bdevs": 4, 00:16:37.596 "num_base_bdevs_discovered": 1, 00:16:37.596 "num_base_bdevs_operational": 4, 00:16:37.596 "base_bdevs_list": [ 00:16:37.596 { 00:16:37.596 "name": "BaseBdev1", 00:16:37.596 "uuid": "751527ba-73ff-48f1-960a-9eb6e117477f", 00:16:37.596 "is_configured": true, 00:16:37.596 "data_offset": 2048, 00:16:37.596 "data_size": 63488 00:16:37.596 }, 00:16:37.596 { 00:16:37.596 "name": "BaseBdev2", 00:16:37.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.596 "is_configured": false, 00:16:37.596 "data_offset": 0, 00:16:37.596 "data_size": 0 00:16:37.596 }, 00:16:37.596 { 00:16:37.596 "name": "BaseBdev3", 00:16:37.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.596 "is_configured": false, 00:16:37.596 "data_offset": 0, 00:16:37.596 "data_size": 0 00:16:37.596 }, 00:16:37.596 { 00:16:37.596 "name": "BaseBdev4", 00:16:37.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.596 "is_configured": false, 00:16:37.596 "data_offset": 0, 00:16:37.596 "data_size": 0 00:16:37.596 } 00:16:37.596 ] 00:16:37.596 }' 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.596 09:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 [2024-11-06 09:09:14.486890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:16:38.162 BaseBdev2 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.162 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.162 [ 00:16:38.162 { 00:16:38.162 "name": "BaseBdev2", 00:16:38.162 "aliases": [ 00:16:38.162 "6fad1b5d-8beb-41ad-8435-2c7eceea748f" 00:16:38.162 ], 00:16:38.162 "product_name": "Malloc disk", 00:16:38.162 "block_size": 512, 00:16:38.162 "num_blocks": 65536, 00:16:38.162 "uuid": "6fad1b5d-8beb-41ad-8435-2c7eceea748f", 
00:16:38.162 "assigned_rate_limits": { 00:16:38.162 "rw_ios_per_sec": 0, 00:16:38.162 "rw_mbytes_per_sec": 0, 00:16:38.162 "r_mbytes_per_sec": 0, 00:16:38.162 "w_mbytes_per_sec": 0 00:16:38.162 }, 00:16:38.162 "claimed": true, 00:16:38.162 "claim_type": "exclusive_write", 00:16:38.162 "zoned": false, 00:16:38.162 "supported_io_types": { 00:16:38.162 "read": true, 00:16:38.162 "write": true, 00:16:38.163 "unmap": true, 00:16:38.163 "flush": true, 00:16:38.163 "reset": true, 00:16:38.163 "nvme_admin": false, 00:16:38.163 "nvme_io": false, 00:16:38.163 "nvme_io_md": false, 00:16:38.163 "write_zeroes": true, 00:16:38.163 "zcopy": true, 00:16:38.163 "get_zone_info": false, 00:16:38.163 "zone_management": false, 00:16:38.163 "zone_append": false, 00:16:38.163 "compare": false, 00:16:38.163 "compare_and_write": false, 00:16:38.163 "abort": true, 00:16:38.163 "seek_hole": false, 00:16:38.163 "seek_data": false, 00:16:38.163 "copy": true, 00:16:38.163 "nvme_iov_md": false 00:16:38.163 }, 00:16:38.163 "memory_domains": [ 00:16:38.163 { 00:16:38.163 "dma_device_id": "system", 00:16:38.163 "dma_device_type": 1 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.163 "dma_device_type": 2 00:16:38.163 } 00:16:38.163 ], 00:16:38.163 "driver_specific": {} 00:16:38.163 } 00:16:38.163 ] 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.163 "name": "Existed_Raid", 00:16:38.163 "uuid": "9e18f0ba-5329-4e70-b680-7a85882114d0", 00:16:38.163 "strip_size_kb": 64, 00:16:38.163 "state": "configuring", 00:16:38.163 "raid_level": "concat", 00:16:38.163 "superblock": true, 00:16:38.163 "num_base_bdevs": 4, 00:16:38.163 "num_base_bdevs_discovered": 2, 00:16:38.163 
"num_base_bdevs_operational": 4, 00:16:38.163 "base_bdevs_list": [ 00:16:38.163 { 00:16:38.163 "name": "BaseBdev1", 00:16:38.163 "uuid": "751527ba-73ff-48f1-960a-9eb6e117477f", 00:16:38.163 "is_configured": true, 00:16:38.163 "data_offset": 2048, 00:16:38.163 "data_size": 63488 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "name": "BaseBdev2", 00:16:38.163 "uuid": "6fad1b5d-8beb-41ad-8435-2c7eceea748f", 00:16:38.163 "is_configured": true, 00:16:38.163 "data_offset": 2048, 00:16:38.163 "data_size": 63488 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "name": "BaseBdev3", 00:16:38.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.163 "is_configured": false, 00:16:38.163 "data_offset": 0, 00:16:38.163 "data_size": 0 00:16:38.163 }, 00:16:38.163 { 00:16:38.163 "name": "BaseBdev4", 00:16:38.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.163 "is_configured": false, 00:16:38.163 "data_offset": 0, 00:16:38.163 "data_size": 0 00:16:38.163 } 00:16:38.163 ] 00:16:38.163 }' 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.163 09:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.730 [2024-11-06 09:09:15.087295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.730 BaseBdev3 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- 
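The trace repeats one pattern per base bdev: `bdev_malloc_create 32 512 -b BaseBdevN`, a `waitforbdev`, then a state re-check in which `num_base_bdevs_discovered` grows by one while `state` stays `configuring` until all four members are claimed. The driving loop can be sketched as below; `rpc_cmd` is stubbed here for illustration (the real wrapper talks to `/var/tmp/spdk.sock`), and the final `online` state is the expected transition once the last base bdev arrives, which this log chunk has not yet reached:

```shell
# Stub standing in for SPDK's rpc_cmd wrapper; purely for illustration.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
discovered=0
for i in $(seq 1 "$num_base_bdevs"); do
	# Real call: one 32 MB malloc disk with 512-byte blocks per base bdev.
	rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$i"
	discovered=$((discovered + 1))
	# Until the last base bdev is claimed the raid stays in "configuring";
	# only then does it transition out of that state.
	if [ "$discovered" -lt "$num_base_bdevs" ]; then
		state=configuring
	else
		state=online
	fi
	echo "BaseBdev$i added: discovered=$discovered state=$state"
done
```

That is why the JSON dumps above show `"num_base_bdevs_discovered"` stepping 0 → 1 → 2 with `"state": "configuring"` throughout this chunk.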
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.730 [ 00:16:38.730 { 00:16:38.730 "name": "BaseBdev3", 00:16:38.730 "aliases": [ 00:16:38.730 "7a31a5d7-add0-41e2-9e6c-010d2d7a5d97" 00:16:38.730 ], 00:16:38.730 "product_name": "Malloc disk", 00:16:38.730 "block_size": 512, 00:16:38.730 "num_blocks": 65536, 00:16:38.730 "uuid": "7a31a5d7-add0-41e2-9e6c-010d2d7a5d97", 00:16:38.730 "assigned_rate_limits": { 00:16:38.730 "rw_ios_per_sec": 0, 00:16:38.730 "rw_mbytes_per_sec": 0, 00:16:38.730 "r_mbytes_per_sec": 0, 00:16:38.730 "w_mbytes_per_sec": 0 00:16:38.730 }, 00:16:38.730 "claimed": true, 00:16:38.730 "claim_type": "exclusive_write", 00:16:38.730 "zoned": false, 00:16:38.730 "supported_io_types": { 
00:16:38.730 "read": true, 00:16:38.730 "write": true, 00:16:38.730 "unmap": true, 00:16:38.730 "flush": true, 00:16:38.730 "reset": true, 00:16:38.730 "nvme_admin": false, 00:16:38.730 "nvme_io": false, 00:16:38.730 "nvme_io_md": false, 00:16:38.730 "write_zeroes": true, 00:16:38.730 "zcopy": true, 00:16:38.730 "get_zone_info": false, 00:16:38.730 "zone_management": false, 00:16:38.730 "zone_append": false, 00:16:38.730 "compare": false, 00:16:38.730 "compare_and_write": false, 00:16:38.730 "abort": true, 00:16:38.730 "seek_hole": false, 00:16:38.730 "seek_data": false, 00:16:38.730 "copy": true, 00:16:38.730 "nvme_iov_md": false 00:16:38.730 }, 00:16:38.730 "memory_domains": [ 00:16:38.730 { 00:16:38.730 "dma_device_id": "system", 00:16:38.730 "dma_device_type": 1 00:16:38.730 }, 00:16:38.730 { 00:16:38.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.730 "dma_device_type": 2 00:16:38.730 } 00:16:38.730 ], 00:16:38.730 "driver_specific": {} 00:16:38.730 } 00:16:38.730 ] 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.730 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.730 "name": "Existed_Raid", 00:16:38.730 "uuid": "9e18f0ba-5329-4e70-b680-7a85882114d0", 00:16:38.730 "strip_size_kb": 64, 00:16:38.730 "state": "configuring", 00:16:38.730 "raid_level": "concat", 00:16:38.730 "superblock": true, 00:16:38.730 "num_base_bdevs": 4, 00:16:38.730 "num_base_bdevs_discovered": 3, 00:16:38.730 "num_base_bdevs_operational": 4, 00:16:38.730 "base_bdevs_list": [ 00:16:38.730 { 00:16:38.730 "name": "BaseBdev1", 00:16:38.730 "uuid": "751527ba-73ff-48f1-960a-9eb6e117477f", 00:16:38.730 "is_configured": true, 00:16:38.730 "data_offset": 2048, 00:16:38.730 "data_size": 63488 00:16:38.730 }, 00:16:38.730 { 00:16:38.730 "name": "BaseBdev2", 00:16:38.730 
"uuid": "6fad1b5d-8beb-41ad-8435-2c7eceea748f", 00:16:38.730 "is_configured": true, 00:16:38.730 "data_offset": 2048, 00:16:38.730 "data_size": 63488 00:16:38.730 }, 00:16:38.730 { 00:16:38.730 "name": "BaseBdev3", 00:16:38.730 "uuid": "7a31a5d7-add0-41e2-9e6c-010d2d7a5d97", 00:16:38.730 "is_configured": true, 00:16:38.730 "data_offset": 2048, 00:16:38.730 "data_size": 63488 00:16:38.730 }, 00:16:38.731 { 00:16:38.731 "name": "BaseBdev4", 00:16:38.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.731 "is_configured": false, 00:16:38.731 "data_offset": 0, 00:16:38.731 "data_size": 0 00:16:38.731 } 00:16:38.731 ] 00:16:38.731 }' 00:16:38.731 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.731 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.296 [2024-11-06 09:09:15.609866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:39.296 [2024-11-06 09:09:15.610459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:39.296 [2024-11-06 09:09:15.610485] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:39.296 [2024-11-06 09:09:15.610860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:39.296 BaseBdev4 00:16:39.296 [2024-11-06 09:09:15.611057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:39.296 [2024-11-06 09:09:15.611082] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:16:39.296 [2024-11-06 09:09:15.611255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.296 [ 00:16:39.296 { 00:16:39.296 "name": "BaseBdev4", 00:16:39.296 "aliases": [ 00:16:39.296 "52fae810-d729-4de2-942a-421e149bd9c4" 00:16:39.296 ], 00:16:39.296 "product_name": "Malloc disk", 00:16:39.296 "block_size": 512, 00:16:39.296 
"num_blocks": 65536, 00:16:39.296 "uuid": "52fae810-d729-4de2-942a-421e149bd9c4", 00:16:39.296 "assigned_rate_limits": { 00:16:39.296 "rw_ios_per_sec": 0, 00:16:39.296 "rw_mbytes_per_sec": 0, 00:16:39.296 "r_mbytes_per_sec": 0, 00:16:39.296 "w_mbytes_per_sec": 0 00:16:39.296 }, 00:16:39.296 "claimed": true, 00:16:39.296 "claim_type": "exclusive_write", 00:16:39.296 "zoned": false, 00:16:39.296 "supported_io_types": { 00:16:39.296 "read": true, 00:16:39.296 "write": true, 00:16:39.296 "unmap": true, 00:16:39.296 "flush": true, 00:16:39.296 "reset": true, 00:16:39.296 "nvme_admin": false, 00:16:39.296 "nvme_io": false, 00:16:39.296 "nvme_io_md": false, 00:16:39.296 "write_zeroes": true, 00:16:39.296 "zcopy": true, 00:16:39.296 "get_zone_info": false, 00:16:39.296 "zone_management": false, 00:16:39.296 "zone_append": false, 00:16:39.296 "compare": false, 00:16:39.296 "compare_and_write": false, 00:16:39.296 "abort": true, 00:16:39.296 "seek_hole": false, 00:16:39.296 "seek_data": false, 00:16:39.296 "copy": true, 00:16:39.296 "nvme_iov_md": false 00:16:39.296 }, 00:16:39.296 "memory_domains": [ 00:16:39.296 { 00:16:39.296 "dma_device_id": "system", 00:16:39.296 "dma_device_type": 1 00:16:39.296 }, 00:16:39.296 { 00:16:39.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.296 "dma_device_type": 2 00:16:39.296 } 00:16:39.296 ], 00:16:39.296 "driver_specific": {} 00:16:39.296 } 00:16:39.296 ] 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.296 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.296 "name": "Existed_Raid", 00:16:39.296 "uuid": "9e18f0ba-5329-4e70-b680-7a85882114d0", 00:16:39.296 "strip_size_kb": 64, 00:16:39.296 "state": "online", 00:16:39.296 "raid_level": "concat", 00:16:39.296 "superblock": true, 00:16:39.296 "num_base_bdevs": 4, 
00:16:39.296 "num_base_bdevs_discovered": 4, 00:16:39.296 "num_base_bdevs_operational": 4, 00:16:39.296 "base_bdevs_list": [ 00:16:39.296 { 00:16:39.296 "name": "BaseBdev1", 00:16:39.296 "uuid": "751527ba-73ff-48f1-960a-9eb6e117477f", 00:16:39.296 "is_configured": true, 00:16:39.296 "data_offset": 2048, 00:16:39.296 "data_size": 63488 00:16:39.296 }, 00:16:39.296 { 00:16:39.296 "name": "BaseBdev2", 00:16:39.296 "uuid": "6fad1b5d-8beb-41ad-8435-2c7eceea748f", 00:16:39.296 "is_configured": true, 00:16:39.296 "data_offset": 2048, 00:16:39.296 "data_size": 63488 00:16:39.296 }, 00:16:39.296 { 00:16:39.296 "name": "BaseBdev3", 00:16:39.296 "uuid": "7a31a5d7-add0-41e2-9e6c-010d2d7a5d97", 00:16:39.296 "is_configured": true, 00:16:39.296 "data_offset": 2048, 00:16:39.296 "data_size": 63488 00:16:39.296 }, 00:16:39.296 { 00:16:39.296 "name": "BaseBdev4", 00:16:39.296 "uuid": "52fae810-d729-4de2-942a-421e149bd9c4", 00:16:39.296 "is_configured": true, 00:16:39.296 "data_offset": 2048, 00:16:39.296 "data_size": 63488 00:16:39.296 } 00:16:39.296 ] 00:16:39.296 }' 00:16:39.297 09:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.297 09:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.555 
09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.555 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.555 [2024-11-06 09:09:16.078598] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.813 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.813 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.813 "name": "Existed_Raid", 00:16:39.813 "aliases": [ 00:16:39.813 "9e18f0ba-5329-4e70-b680-7a85882114d0" 00:16:39.813 ], 00:16:39.813 "product_name": "Raid Volume", 00:16:39.813 "block_size": 512, 00:16:39.813 "num_blocks": 253952, 00:16:39.813 "uuid": "9e18f0ba-5329-4e70-b680-7a85882114d0", 00:16:39.813 "assigned_rate_limits": { 00:16:39.813 "rw_ios_per_sec": 0, 00:16:39.813 "rw_mbytes_per_sec": 0, 00:16:39.813 "r_mbytes_per_sec": 0, 00:16:39.813 "w_mbytes_per_sec": 0 00:16:39.813 }, 00:16:39.813 "claimed": false, 00:16:39.813 "zoned": false, 00:16:39.813 "supported_io_types": { 00:16:39.813 "read": true, 00:16:39.813 "write": true, 00:16:39.813 "unmap": true, 00:16:39.813 "flush": true, 00:16:39.813 "reset": true, 00:16:39.813 "nvme_admin": false, 00:16:39.813 "nvme_io": false, 00:16:39.813 "nvme_io_md": false, 00:16:39.813 "write_zeroes": true, 00:16:39.813 "zcopy": false, 00:16:39.813 "get_zone_info": false, 00:16:39.813 "zone_management": false, 00:16:39.813 "zone_append": false, 00:16:39.813 "compare": false, 00:16:39.813 "compare_and_write": false, 00:16:39.813 "abort": false, 00:16:39.813 "seek_hole": false, 00:16:39.813 "seek_data": false, 00:16:39.813 "copy": false, 00:16:39.813 
"nvme_iov_md": false 00:16:39.813 }, 00:16:39.813 "memory_domains": [ 00:16:39.813 { 00:16:39.813 "dma_device_id": "system", 00:16:39.813 "dma_device_type": 1 00:16:39.813 }, 00:16:39.813 { 00:16:39.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.813 "dma_device_type": 2 00:16:39.813 }, 00:16:39.813 { 00:16:39.813 "dma_device_id": "system", 00:16:39.813 "dma_device_type": 1 00:16:39.813 }, 00:16:39.813 { 00:16:39.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.813 "dma_device_type": 2 00:16:39.813 }, 00:16:39.813 { 00:16:39.813 "dma_device_id": "system", 00:16:39.813 "dma_device_type": 1 00:16:39.813 }, 00:16:39.813 { 00:16:39.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.813 "dma_device_type": 2 00:16:39.813 }, 00:16:39.813 { 00:16:39.813 "dma_device_id": "system", 00:16:39.813 "dma_device_type": 1 00:16:39.813 }, 00:16:39.813 { 00:16:39.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.813 "dma_device_type": 2 00:16:39.813 } 00:16:39.813 ], 00:16:39.813 "driver_specific": { 00:16:39.813 "raid": { 00:16:39.813 "uuid": "9e18f0ba-5329-4e70-b680-7a85882114d0", 00:16:39.813 "strip_size_kb": 64, 00:16:39.813 "state": "online", 00:16:39.813 "raid_level": "concat", 00:16:39.814 "superblock": true, 00:16:39.814 "num_base_bdevs": 4, 00:16:39.814 "num_base_bdevs_discovered": 4, 00:16:39.814 "num_base_bdevs_operational": 4, 00:16:39.814 "base_bdevs_list": [ 00:16:39.814 { 00:16:39.814 "name": "BaseBdev1", 00:16:39.814 "uuid": "751527ba-73ff-48f1-960a-9eb6e117477f", 00:16:39.814 "is_configured": true, 00:16:39.814 "data_offset": 2048, 00:16:39.814 "data_size": 63488 00:16:39.814 }, 00:16:39.814 { 00:16:39.814 "name": "BaseBdev2", 00:16:39.814 "uuid": "6fad1b5d-8beb-41ad-8435-2c7eceea748f", 00:16:39.814 "is_configured": true, 00:16:39.814 "data_offset": 2048, 00:16:39.814 "data_size": 63488 00:16:39.814 }, 00:16:39.814 { 00:16:39.814 "name": "BaseBdev3", 00:16:39.814 "uuid": "7a31a5d7-add0-41e2-9e6c-010d2d7a5d97", 00:16:39.814 "is_configured": true, 
00:16:39.814 "data_offset": 2048, 00:16:39.814 "data_size": 63488 00:16:39.814 }, 00:16:39.814 { 00:16:39.814 "name": "BaseBdev4", 00:16:39.814 "uuid": "52fae810-d729-4de2-942a-421e149bd9c4", 00:16:39.814 "is_configured": true, 00:16:39.814 "data_offset": 2048, 00:16:39.814 "data_size": 63488 00:16:39.814 } 00:16:39.814 ] 00:16:39.814 } 00:16:39.814 } 00:16:39.814 }' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:39.814 BaseBdev2 00:16:39.814 BaseBdev3 00:16:39.814 BaseBdev4' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.814 09:09:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.814 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.073 [2024-11-06 09:09:16.442385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:40.073 [2024-11-06 09:09:16.442432] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.073 [2024-11-06 09:09:16.442498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.073 "name": "Existed_Raid", 00:16:40.073 "uuid": "9e18f0ba-5329-4e70-b680-7a85882114d0", 00:16:40.073 "strip_size_kb": 64, 00:16:40.073 "state": "offline", 00:16:40.073 "raid_level": "concat", 00:16:40.073 "superblock": true, 00:16:40.073 "num_base_bdevs": 4, 00:16:40.073 "num_base_bdevs_discovered": 3, 00:16:40.073 "num_base_bdevs_operational": 3, 00:16:40.073 "base_bdevs_list": [ 00:16:40.073 { 00:16:40.073 "name": null, 00:16:40.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.073 "is_configured": false, 00:16:40.073 "data_offset": 0, 00:16:40.073 "data_size": 63488 00:16:40.073 }, 00:16:40.073 { 00:16:40.073 "name": "BaseBdev2", 00:16:40.073 "uuid": "6fad1b5d-8beb-41ad-8435-2c7eceea748f", 00:16:40.073 "is_configured": true, 00:16:40.073 "data_offset": 2048, 00:16:40.073 "data_size": 63488 00:16:40.073 }, 00:16:40.073 { 00:16:40.073 "name": "BaseBdev3", 00:16:40.073 "uuid": "7a31a5d7-add0-41e2-9e6c-010d2d7a5d97", 00:16:40.073 "is_configured": true, 00:16:40.073 "data_offset": 2048, 00:16:40.073 "data_size": 63488 00:16:40.073 }, 00:16:40.073 { 00:16:40.073 "name": "BaseBdev4", 00:16:40.073 "uuid": "52fae810-d729-4de2-942a-421e149bd9c4", 00:16:40.073 "is_configured": true, 00:16:40.073 "data_offset": 2048, 00:16:40.073 "data_size": 63488 00:16:40.073 } 00:16:40.073 ] 00:16:40.073 }' 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.073 09:09:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.640 
09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.640 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.640 [2024-11-06 09:09:17.101199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.898 [2024-11-06 09:09:17.252561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:40.898 09:09:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.898 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.898 [2024-11-06 09:09:17.390145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:40.898 [2024-11-06 09:09:17.390429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 BaseBdev2 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 [ 00:16:41.157 { 00:16:41.157 "name": "BaseBdev2", 00:16:41.157 "aliases": [ 00:16:41.157 
"b39cd2f2-64de-4b74-ad4a-6cdd958a6b42" 00:16:41.157 ], 00:16:41.157 "product_name": "Malloc disk", 00:16:41.157 "block_size": 512, 00:16:41.157 "num_blocks": 65536, 00:16:41.157 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:41.157 "assigned_rate_limits": { 00:16:41.157 "rw_ios_per_sec": 0, 00:16:41.157 "rw_mbytes_per_sec": 0, 00:16:41.157 "r_mbytes_per_sec": 0, 00:16:41.157 "w_mbytes_per_sec": 0 00:16:41.157 }, 00:16:41.157 "claimed": false, 00:16:41.157 "zoned": false, 00:16:41.157 "supported_io_types": { 00:16:41.157 "read": true, 00:16:41.157 "write": true, 00:16:41.157 "unmap": true, 00:16:41.157 "flush": true, 00:16:41.157 "reset": true, 00:16:41.157 "nvme_admin": false, 00:16:41.157 "nvme_io": false, 00:16:41.157 "nvme_io_md": false, 00:16:41.157 "write_zeroes": true, 00:16:41.157 "zcopy": true, 00:16:41.157 "get_zone_info": false, 00:16:41.157 "zone_management": false, 00:16:41.157 "zone_append": false, 00:16:41.157 "compare": false, 00:16:41.157 "compare_and_write": false, 00:16:41.157 "abort": true, 00:16:41.157 "seek_hole": false, 00:16:41.157 "seek_data": false, 00:16:41.157 "copy": true, 00:16:41.157 "nvme_iov_md": false 00:16:41.157 }, 00:16:41.157 "memory_domains": [ 00:16:41.157 { 00:16:41.157 "dma_device_id": "system", 00:16:41.157 "dma_device_type": 1 00:16:41.157 }, 00:16:41.157 { 00:16:41.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.157 "dma_device_type": 2 00:16:41.157 } 00:16:41.157 ], 00:16:41.157 "driver_specific": {} 00:16:41.157 } 00:16:41.157 ] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.157 09:09:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 BaseBdev3 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.157 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.157 [ 00:16:41.157 { 
00:16:41.157 "name": "BaseBdev3", 00:16:41.157 "aliases": [ 00:16:41.157 "47740e3b-3c33-4721-820a-36797232b8ee" 00:16:41.157 ], 00:16:41.157 "product_name": "Malloc disk", 00:16:41.157 "block_size": 512, 00:16:41.157 "num_blocks": 65536, 00:16:41.157 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:41.157 "assigned_rate_limits": { 00:16:41.157 "rw_ios_per_sec": 0, 00:16:41.157 "rw_mbytes_per_sec": 0, 00:16:41.157 "r_mbytes_per_sec": 0, 00:16:41.157 "w_mbytes_per_sec": 0 00:16:41.157 }, 00:16:41.157 "claimed": false, 00:16:41.157 "zoned": false, 00:16:41.157 "supported_io_types": { 00:16:41.157 "read": true, 00:16:41.157 "write": true, 00:16:41.157 "unmap": true, 00:16:41.157 "flush": true, 00:16:41.157 "reset": true, 00:16:41.157 "nvme_admin": false, 00:16:41.157 "nvme_io": false, 00:16:41.157 "nvme_io_md": false, 00:16:41.157 "write_zeroes": true, 00:16:41.157 "zcopy": true, 00:16:41.157 "get_zone_info": false, 00:16:41.157 "zone_management": false, 00:16:41.157 "zone_append": false, 00:16:41.157 "compare": false, 00:16:41.157 "compare_and_write": false, 00:16:41.157 "abort": true, 00:16:41.157 "seek_hole": false, 00:16:41.157 "seek_data": false, 00:16:41.157 "copy": true, 00:16:41.157 "nvme_iov_md": false 00:16:41.157 }, 00:16:41.157 "memory_domains": [ 00:16:41.157 { 00:16:41.157 "dma_device_id": "system", 00:16:41.158 "dma_device_type": 1 00:16:41.158 }, 00:16:41.158 { 00:16:41.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.158 "dma_device_type": 2 00:16:41.158 } 00:16:41.158 ], 00:16:41.158 "driver_specific": {} 00:16:41.158 } 00:16:41.158 ] 00:16:41.158 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.158 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:41.158 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.158 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:16:41.158 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:41.158 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.158 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.417 BaseBdev4 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:16:41.417 [ 00:16:41.417 { 00:16:41.417 "name": "BaseBdev4", 00:16:41.417 "aliases": [ 00:16:41.417 "ee508fea-812d-4a37-937a-2bee706c70c8" 00:16:41.417 ], 00:16:41.417 "product_name": "Malloc disk", 00:16:41.417 "block_size": 512, 00:16:41.417 "num_blocks": 65536, 00:16:41.417 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:41.417 "assigned_rate_limits": { 00:16:41.417 "rw_ios_per_sec": 0, 00:16:41.417 "rw_mbytes_per_sec": 0, 00:16:41.417 "r_mbytes_per_sec": 0, 00:16:41.417 "w_mbytes_per_sec": 0 00:16:41.417 }, 00:16:41.417 "claimed": false, 00:16:41.417 "zoned": false, 00:16:41.417 "supported_io_types": { 00:16:41.417 "read": true, 00:16:41.417 "write": true, 00:16:41.417 "unmap": true, 00:16:41.417 "flush": true, 00:16:41.417 "reset": true, 00:16:41.417 "nvme_admin": false, 00:16:41.417 "nvme_io": false, 00:16:41.417 "nvme_io_md": false, 00:16:41.417 "write_zeroes": true, 00:16:41.417 "zcopy": true, 00:16:41.417 "get_zone_info": false, 00:16:41.417 "zone_management": false, 00:16:41.417 "zone_append": false, 00:16:41.417 "compare": false, 00:16:41.417 "compare_and_write": false, 00:16:41.417 "abort": true, 00:16:41.417 "seek_hole": false, 00:16:41.417 "seek_data": false, 00:16:41.417 "copy": true, 00:16:41.417 "nvme_iov_md": false 00:16:41.417 }, 00:16:41.417 "memory_domains": [ 00:16:41.417 { 00:16:41.417 "dma_device_id": "system", 00:16:41.417 "dma_device_type": 1 00:16:41.417 }, 00:16:41.417 { 00:16:41.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.417 "dma_device_type": 2 00:16:41.417 } 00:16:41.417 ], 00:16:41.417 "driver_specific": {} 00:16:41.417 } 00:16:41.417 ] 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:41.417 09:09:17 
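The trace above shows the suite's recreate loop (`bdev_raid.sh@286`-`@288`): for each missing base bdev it issues `bdev_malloc_create 32 512 -b BaseBdevN` and then blocks in `waitforbdev` until the bdev is registered. A minimal self-contained sketch of that create-and-wait pattern follows; `rpc_cmd` is stubbed here as an assumption (in the real suite it wraps `scripts/rpc.py` against a live SPDK target), and `waitforbdev` is a simplified stand-in for the helper in `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the create-and-wait loop seen in the log (bdev_raid.sh@286-288).
# rpc_cmd is a stub; the real one talks to a running SPDK target via rpc.py.
set -euo pipefail

rpc_cmd() {  # stub: pretend every RPC succeeds immediately
    case "$1" in
        bdev_get_bdevs) echo '[{"name":"'"$3"'"}]' ;;
        *) : ;;
    esac
}

waitforbdev() {  # simplified version of the autotest_common.sh helper
    local bdev_name=$1 try
    for ((try = 0; try < 20; try++)); do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" | grep -q '"name"'; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

num_base_bdevs=4
for ((i = 1; i < num_base_bdevs; i++)); do
    # 32 MiB malloc bdev with 512-byte blocks, matching the log's parameters
    rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev$((i + 1))"
    waitforbdev "BaseBdev$((i + 1))"
done
echo "recreated $((num_base_bdevs - 1)) base bdevs"
```

The loop starts at `i = 1` because BaseBdev1 is deliberately left missing at this stage; the test recreates it later (`bdev_raid.sh@297`) to drive the `num_base_bdevs_discovered` counter.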
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.417 [2024-11-06 09:09:17.759724] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:41.417 [2024-11-06 09:09:17.759947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:41.417 [2024-11-06 09:09:17.759995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.417 [2024-11-06 09:09:17.762635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:41.417 [2024-11-06 09:09:17.762709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.417 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.417 "name": "Existed_Raid", 00:16:41.417 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:41.417 "strip_size_kb": 64, 00:16:41.417 "state": "configuring", 00:16:41.417 "raid_level": "concat", 00:16:41.417 "superblock": true, 00:16:41.417 "num_base_bdevs": 4, 00:16:41.417 "num_base_bdevs_discovered": 3, 00:16:41.417 "num_base_bdevs_operational": 4, 00:16:41.417 "base_bdevs_list": [ 00:16:41.417 { 00:16:41.417 "name": "BaseBdev1", 00:16:41.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.417 "is_configured": false, 00:16:41.417 "data_offset": 0, 00:16:41.417 "data_size": 0 00:16:41.417 }, 00:16:41.417 { 00:16:41.417 "name": "BaseBdev2", 00:16:41.417 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:41.417 "is_configured": true, 00:16:41.417 "data_offset": 2048, 00:16:41.417 "data_size": 63488 
00:16:41.417 }, 00:16:41.417 { 00:16:41.417 "name": "BaseBdev3", 00:16:41.417 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:41.417 "is_configured": true, 00:16:41.417 "data_offset": 2048, 00:16:41.417 "data_size": 63488 00:16:41.417 }, 00:16:41.417 { 00:16:41.417 "name": "BaseBdev4", 00:16:41.417 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:41.417 "is_configured": true, 00:16:41.418 "data_offset": 2048, 00:16:41.418 "data_size": 63488 00:16:41.418 } 00:16:41.418 ] 00:16:41.418 }' 00:16:41.418 09:09:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.418 09:09:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:41.984 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.984 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.984 [2024-11-06 09:09:18.279902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:41.984 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.984 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.985 "name": "Existed_Raid", 00:16:41.985 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:41.985 "strip_size_kb": 64, 00:16:41.985 "state": "configuring", 00:16:41.985 "raid_level": "concat", 00:16:41.985 "superblock": true, 00:16:41.985 "num_base_bdevs": 4, 00:16:41.985 "num_base_bdevs_discovered": 2, 00:16:41.985 "num_base_bdevs_operational": 4, 00:16:41.985 "base_bdevs_list": [ 00:16:41.985 { 00:16:41.985 "name": "BaseBdev1", 00:16:41.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.985 "is_configured": false, 00:16:41.985 "data_offset": 0, 00:16:41.985 "data_size": 0 00:16:41.985 }, 00:16:41.985 { 00:16:41.985 "name": null, 00:16:41.985 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:41.985 "is_configured": false, 00:16:41.985 "data_offset": 0, 00:16:41.985 "data_size": 63488 
00:16:41.985 }, 00:16:41.985 { 00:16:41.985 "name": "BaseBdev3", 00:16:41.985 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:41.985 "is_configured": true, 00:16:41.985 "data_offset": 2048, 00:16:41.985 "data_size": 63488 00:16:41.985 }, 00:16:41.985 { 00:16:41.985 "name": "BaseBdev4", 00:16:41.985 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:41.985 "is_configured": true, 00:16:41.985 "data_offset": 2048, 00:16:41.985 "data_size": 63488 00:16:41.985 } 00:16:41.985 ] 00:16:41.985 }' 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.985 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.552 [2024-11-06 09:09:18.895698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.552 BaseBdev1 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.552 [ 00:16:42.552 { 00:16:42.552 "name": "BaseBdev1", 00:16:42.552 "aliases": [ 00:16:42.552 "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c" 00:16:42.552 ], 00:16:42.552 "product_name": "Malloc disk", 00:16:42.552 "block_size": 512, 00:16:42.552 "num_blocks": 65536, 00:16:42.552 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:42.552 "assigned_rate_limits": { 00:16:42.552 "rw_ios_per_sec": 0, 00:16:42.552 "rw_mbytes_per_sec": 0, 
00:16:42.552 "r_mbytes_per_sec": 0, 00:16:42.552 "w_mbytes_per_sec": 0 00:16:42.552 }, 00:16:42.552 "claimed": true, 00:16:42.552 "claim_type": "exclusive_write", 00:16:42.552 "zoned": false, 00:16:42.552 "supported_io_types": { 00:16:42.552 "read": true, 00:16:42.552 "write": true, 00:16:42.552 "unmap": true, 00:16:42.552 "flush": true, 00:16:42.552 "reset": true, 00:16:42.552 "nvme_admin": false, 00:16:42.552 "nvme_io": false, 00:16:42.552 "nvme_io_md": false, 00:16:42.552 "write_zeroes": true, 00:16:42.552 "zcopy": true, 00:16:42.552 "get_zone_info": false, 00:16:42.552 "zone_management": false, 00:16:42.552 "zone_append": false, 00:16:42.552 "compare": false, 00:16:42.552 "compare_and_write": false, 00:16:42.552 "abort": true, 00:16:42.552 "seek_hole": false, 00:16:42.552 "seek_data": false, 00:16:42.552 "copy": true, 00:16:42.552 "nvme_iov_md": false 00:16:42.552 }, 00:16:42.552 "memory_domains": [ 00:16:42.552 { 00:16:42.552 "dma_device_id": "system", 00:16:42.552 "dma_device_type": 1 00:16:42.552 }, 00:16:42.552 { 00:16:42.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.552 "dma_device_type": 2 00:16:42.552 } 00:16:42.552 ], 00:16:42.552 "driver_specific": {} 00:16:42.552 } 00:16:42.552 ] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:42.552 09:09:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.552 "name": "Existed_Raid", 00:16:42.552 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:42.552 "strip_size_kb": 64, 00:16:42.552 "state": "configuring", 00:16:42.552 "raid_level": "concat", 00:16:42.552 "superblock": true, 00:16:42.552 "num_base_bdevs": 4, 00:16:42.552 "num_base_bdevs_discovered": 3, 00:16:42.552 "num_base_bdevs_operational": 4, 00:16:42.552 "base_bdevs_list": [ 00:16:42.552 { 00:16:42.552 "name": "BaseBdev1", 00:16:42.552 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:42.552 "is_configured": true, 00:16:42.552 "data_offset": 2048, 00:16:42.552 "data_size": 63488 00:16:42.552 }, 00:16:42.552 { 
00:16:42.552 "name": null, 00:16:42.552 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:42.552 "is_configured": false, 00:16:42.552 "data_offset": 0, 00:16:42.552 "data_size": 63488 00:16:42.552 }, 00:16:42.552 { 00:16:42.552 "name": "BaseBdev3", 00:16:42.552 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:42.552 "is_configured": true, 00:16:42.552 "data_offset": 2048, 00:16:42.552 "data_size": 63488 00:16:42.552 }, 00:16:42.552 { 00:16:42.552 "name": "BaseBdev4", 00:16:42.552 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:42.552 "is_configured": true, 00:16:42.552 "data_offset": 2048, 00:16:42.552 "data_size": 63488 00:16:42.552 } 00:16:42.552 ] 00:16:42.552 }' 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.552 09:09:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.117 [2024-11-06 09:09:19.519976] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.117 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.117 09:09:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.117 "name": "Existed_Raid", 00:16:43.117 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:43.117 "strip_size_kb": 64, 00:16:43.117 "state": "configuring", 00:16:43.117 "raid_level": "concat", 00:16:43.117 "superblock": true, 00:16:43.117 "num_base_bdevs": 4, 00:16:43.117 "num_base_bdevs_discovered": 2, 00:16:43.117 "num_base_bdevs_operational": 4, 00:16:43.117 "base_bdevs_list": [ 00:16:43.117 { 00:16:43.117 "name": "BaseBdev1", 00:16:43.117 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:43.117 "is_configured": true, 00:16:43.117 "data_offset": 2048, 00:16:43.117 "data_size": 63488 00:16:43.117 }, 00:16:43.117 { 00:16:43.117 "name": null, 00:16:43.117 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:43.117 "is_configured": false, 00:16:43.117 "data_offset": 0, 00:16:43.117 "data_size": 63488 00:16:43.117 }, 00:16:43.117 { 00:16:43.117 "name": null, 00:16:43.117 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:43.117 "is_configured": false, 00:16:43.117 "data_offset": 0, 00:16:43.117 "data_size": 63488 00:16:43.117 }, 00:16:43.117 { 00:16:43.117 "name": "BaseBdev4", 00:16:43.117 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:43.117 "is_configured": true, 00:16:43.117 "data_offset": 2048, 00:16:43.117 "data_size": 63488 00:16:43.117 } 00:16:43.117 ] 00:16:43.117 }' 00:16:43.118 09:09:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.118 09:09:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.683 09:09:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.683 [2024-11-06 09:09:20.148203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.683 "name": "Existed_Raid", 00:16:43.683 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:43.683 "strip_size_kb": 64, 00:16:43.683 "state": "configuring", 00:16:43.683 "raid_level": "concat", 00:16:43.683 "superblock": true, 00:16:43.683 "num_base_bdevs": 4, 00:16:43.683 "num_base_bdevs_discovered": 3, 00:16:43.683 "num_base_bdevs_operational": 4, 00:16:43.683 "base_bdevs_list": [ 00:16:43.683 { 00:16:43.683 "name": "BaseBdev1", 00:16:43.683 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:43.683 "is_configured": true, 00:16:43.683 "data_offset": 2048, 00:16:43.683 "data_size": 63488 00:16:43.683 }, 00:16:43.683 { 00:16:43.683 "name": null, 00:16:43.683 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:43.683 "is_configured": false, 00:16:43.683 "data_offset": 0, 00:16:43.683 "data_size": 63488 00:16:43.683 }, 00:16:43.683 { 00:16:43.683 "name": "BaseBdev3", 00:16:43.683 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:43.683 "is_configured": true, 00:16:43.683 "data_offset": 2048, 00:16:43.683 "data_size": 63488 00:16:43.683 }, 00:16:43.683 { 00:16:43.683 "name": "BaseBdev4", 00:16:43.683 "uuid": 
"ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:43.683 "is_configured": true, 00:16:43.683 "data_offset": 2048, 00:16:43.683 "data_size": 63488 00:16:43.683 } 00:16:43.683 ] 00:16:43.683 }' 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.683 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.288 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.288 [2024-11-06 09:09:20.740400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.546 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.546 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.547 "name": "Existed_Raid", 00:16:44.547 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:44.547 "strip_size_kb": 64, 00:16:44.547 "state": "configuring", 00:16:44.547 "raid_level": "concat", 00:16:44.547 "superblock": true, 00:16:44.547 "num_base_bdevs": 4, 00:16:44.547 "num_base_bdevs_discovered": 2, 00:16:44.547 "num_base_bdevs_operational": 4, 00:16:44.547 "base_bdevs_list": [ 00:16:44.547 { 00:16:44.547 "name": null, 00:16:44.547 
"uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:44.547 "is_configured": false, 00:16:44.547 "data_offset": 0, 00:16:44.547 "data_size": 63488 00:16:44.547 }, 00:16:44.547 { 00:16:44.547 "name": null, 00:16:44.547 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:44.547 "is_configured": false, 00:16:44.547 "data_offset": 0, 00:16:44.547 "data_size": 63488 00:16:44.547 }, 00:16:44.547 { 00:16:44.547 "name": "BaseBdev3", 00:16:44.547 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:44.547 "is_configured": true, 00:16:44.547 "data_offset": 2048, 00:16:44.547 "data_size": 63488 00:16:44.547 }, 00:16:44.547 { 00:16:44.547 "name": "BaseBdev4", 00:16:44.547 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:44.547 "is_configured": true, 00:16:44.547 "data_offset": 2048, 00:16:44.547 "data_size": 63488 00:16:44.547 } 00:16:44.547 ] 00:16:44.547 }' 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.547 09:09:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.113 [2024-11-06 09:09:21.410991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.113 09:09:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.113 "name": "Existed_Raid", 00:16:45.113 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:45.113 "strip_size_kb": 64, 00:16:45.113 "state": "configuring", 00:16:45.113 "raid_level": "concat", 00:16:45.113 "superblock": true, 00:16:45.113 "num_base_bdevs": 4, 00:16:45.113 "num_base_bdevs_discovered": 3, 00:16:45.113 "num_base_bdevs_operational": 4, 00:16:45.113 "base_bdevs_list": [ 00:16:45.113 { 00:16:45.113 "name": null, 00:16:45.113 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:45.113 "is_configured": false, 00:16:45.113 "data_offset": 0, 00:16:45.113 "data_size": 63488 00:16:45.113 }, 00:16:45.113 { 00:16:45.113 "name": "BaseBdev2", 00:16:45.113 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:45.113 "is_configured": true, 00:16:45.113 "data_offset": 2048, 00:16:45.113 "data_size": 63488 00:16:45.113 }, 00:16:45.113 { 00:16:45.113 "name": "BaseBdev3", 00:16:45.113 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:45.113 "is_configured": true, 00:16:45.113 "data_offset": 2048, 00:16:45.113 "data_size": 63488 00:16:45.113 }, 00:16:45.113 { 00:16:45.113 "name": "BaseBdev4", 00:16:45.113 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:45.113 "is_configured": true, 00:16:45.113 "data_offset": 2048, 00:16:45.113 "data_size": 63488 00:16:45.113 } 00:16:45.113 ] 00:16:45.113 }' 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.113 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:45.680 09:09:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.680 09:09:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f025ffe3-e156-4ba9-90b3-a6ecf32aa30c 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.680 [2024-11-06 09:09:22.074675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:45.680 [2024-11-06 09:09:22.075023] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:45.680 [2024-11-06 09:09:22.075042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:45.680 [2024-11-06 09:09:22.075370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:16:45.680 NewBaseBdev 00:16:45.680 [2024-11-06 09:09:22.075714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:45.680 [2024-11-06 09:09:22.075857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:45.680 [2024-11-06 09:09:22.076079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.680 09:09:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.680 [ 00:16:45.680 { 00:16:45.680 "name": "NewBaseBdev", 00:16:45.680 "aliases": [ 00:16:45.680 "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c" 00:16:45.680 ], 00:16:45.680 "product_name": "Malloc disk", 00:16:45.680 "block_size": 512, 00:16:45.680 "num_blocks": 65536, 00:16:45.680 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:45.680 "assigned_rate_limits": { 00:16:45.680 "rw_ios_per_sec": 0, 00:16:45.680 "rw_mbytes_per_sec": 0, 00:16:45.680 "r_mbytes_per_sec": 0, 00:16:45.680 "w_mbytes_per_sec": 0 00:16:45.680 }, 00:16:45.680 "claimed": true, 00:16:45.680 "claim_type": "exclusive_write", 00:16:45.680 "zoned": false, 00:16:45.680 "supported_io_types": { 00:16:45.680 "read": true, 00:16:45.680 "write": true, 00:16:45.680 "unmap": true, 00:16:45.680 "flush": true, 00:16:45.680 "reset": true, 00:16:45.680 "nvme_admin": false, 00:16:45.680 "nvme_io": false, 00:16:45.680 "nvme_io_md": false, 00:16:45.680 "write_zeroes": true, 00:16:45.680 "zcopy": true, 00:16:45.680 "get_zone_info": false, 00:16:45.680 "zone_management": false, 00:16:45.680 "zone_append": false, 00:16:45.680 "compare": false, 00:16:45.680 "compare_and_write": false, 00:16:45.680 "abort": true, 00:16:45.680 "seek_hole": false, 00:16:45.680 "seek_data": false, 00:16:45.680 "copy": true, 00:16:45.680 "nvme_iov_md": false 00:16:45.680 }, 00:16:45.680 "memory_domains": [ 00:16:45.680 { 00:16:45.680 "dma_device_id": "system", 00:16:45.680 "dma_device_type": 1 00:16:45.680 }, 00:16:45.680 { 00:16:45.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.680 "dma_device_type": 2 00:16:45.680 } 00:16:45.680 ], 00:16:45.680 "driver_specific": {} 00:16:45.680 } 00:16:45.680 ] 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:16:45.680 09:09:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.680 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.681 "name": "Existed_Raid", 00:16:45.681 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:45.681 "strip_size_kb": 64, 00:16:45.681 
"state": "online", 00:16:45.681 "raid_level": "concat", 00:16:45.681 "superblock": true, 00:16:45.681 "num_base_bdevs": 4, 00:16:45.681 "num_base_bdevs_discovered": 4, 00:16:45.681 "num_base_bdevs_operational": 4, 00:16:45.681 "base_bdevs_list": [ 00:16:45.681 { 00:16:45.681 "name": "NewBaseBdev", 00:16:45.681 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:45.681 "is_configured": true, 00:16:45.681 "data_offset": 2048, 00:16:45.681 "data_size": 63488 00:16:45.681 }, 00:16:45.681 { 00:16:45.681 "name": "BaseBdev2", 00:16:45.681 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:45.681 "is_configured": true, 00:16:45.681 "data_offset": 2048, 00:16:45.681 "data_size": 63488 00:16:45.681 }, 00:16:45.681 { 00:16:45.681 "name": "BaseBdev3", 00:16:45.681 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:45.681 "is_configured": true, 00:16:45.681 "data_offset": 2048, 00:16:45.681 "data_size": 63488 00:16:45.681 }, 00:16:45.681 { 00:16:45.681 "name": "BaseBdev4", 00:16:45.681 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:45.681 "is_configured": true, 00:16:45.681 "data_offset": 2048, 00:16:45.681 "data_size": 63488 00:16:45.681 } 00:16:45.681 ] 00:16:45.681 }' 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.681 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.246 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:46.246 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:46.246 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:46.246 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:46.246 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:46.247 
09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.247 [2024-11-06 09:09:22.651355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:46.247 "name": "Existed_Raid", 00:16:46.247 "aliases": [ 00:16:46.247 "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b" 00:16:46.247 ], 00:16:46.247 "product_name": "Raid Volume", 00:16:46.247 "block_size": 512, 00:16:46.247 "num_blocks": 253952, 00:16:46.247 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:46.247 "assigned_rate_limits": { 00:16:46.247 "rw_ios_per_sec": 0, 00:16:46.247 "rw_mbytes_per_sec": 0, 00:16:46.247 "r_mbytes_per_sec": 0, 00:16:46.247 "w_mbytes_per_sec": 0 00:16:46.247 }, 00:16:46.247 "claimed": false, 00:16:46.247 "zoned": false, 00:16:46.247 "supported_io_types": { 00:16:46.247 "read": true, 00:16:46.247 "write": true, 00:16:46.247 "unmap": true, 00:16:46.247 "flush": true, 00:16:46.247 "reset": true, 00:16:46.247 "nvme_admin": false, 00:16:46.247 "nvme_io": false, 00:16:46.247 "nvme_io_md": false, 00:16:46.247 "write_zeroes": true, 00:16:46.247 "zcopy": false, 00:16:46.247 "get_zone_info": false, 00:16:46.247 "zone_management": false, 00:16:46.247 "zone_append": false, 00:16:46.247 "compare": false, 00:16:46.247 "compare_and_write": false, 00:16:46.247 "abort": 
false, 00:16:46.247 "seek_hole": false, 00:16:46.247 "seek_data": false, 00:16:46.247 "copy": false, 00:16:46.247 "nvme_iov_md": false 00:16:46.247 }, 00:16:46.247 "memory_domains": [ 00:16:46.247 { 00:16:46.247 "dma_device_id": "system", 00:16:46.247 "dma_device_type": 1 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.247 "dma_device_type": 2 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "dma_device_id": "system", 00:16:46.247 "dma_device_type": 1 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.247 "dma_device_type": 2 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "dma_device_id": "system", 00:16:46.247 "dma_device_type": 1 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.247 "dma_device_type": 2 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "dma_device_id": "system", 00:16:46.247 "dma_device_type": 1 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.247 "dma_device_type": 2 00:16:46.247 } 00:16:46.247 ], 00:16:46.247 "driver_specific": { 00:16:46.247 "raid": { 00:16:46.247 "uuid": "ff9204a8-5c4e-4c1d-a7f5-309f98949b6b", 00:16:46.247 "strip_size_kb": 64, 00:16:46.247 "state": "online", 00:16:46.247 "raid_level": "concat", 00:16:46.247 "superblock": true, 00:16:46.247 "num_base_bdevs": 4, 00:16:46.247 "num_base_bdevs_discovered": 4, 00:16:46.247 "num_base_bdevs_operational": 4, 00:16:46.247 "base_bdevs_list": [ 00:16:46.247 { 00:16:46.247 "name": "NewBaseBdev", 00:16:46.247 "uuid": "f025ffe3-e156-4ba9-90b3-a6ecf32aa30c", 00:16:46.247 "is_configured": true, 00:16:46.247 "data_offset": 2048, 00:16:46.247 "data_size": 63488 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "name": "BaseBdev2", 00:16:46.247 "uuid": "b39cd2f2-64de-4b74-ad4a-6cdd958a6b42", 00:16:46.247 "is_configured": true, 00:16:46.247 "data_offset": 2048, 00:16:46.247 "data_size": 63488 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 
"name": "BaseBdev3", 00:16:46.247 "uuid": "47740e3b-3c33-4721-820a-36797232b8ee", 00:16:46.247 "is_configured": true, 00:16:46.247 "data_offset": 2048, 00:16:46.247 "data_size": 63488 00:16:46.247 }, 00:16:46.247 { 00:16:46.247 "name": "BaseBdev4", 00:16:46.247 "uuid": "ee508fea-812d-4a37-937a-2bee706c70c8", 00:16:46.247 "is_configured": true, 00:16:46.247 "data_offset": 2048, 00:16:46.247 "data_size": 63488 00:16:46.247 } 00:16:46.247 ] 00:16:46.247 } 00:16:46.247 } 00:16:46.247 }' 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:46.247 BaseBdev2 00:16:46.247 BaseBdev3 00:16:46.247 BaseBdev4' 00:16:46.247 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.505 09:09:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.505 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.506 09:09:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.506 [2024-11-06 09:09:23.027050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:46.506 [2024-11-06 09:09:23.027214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.506 [2024-11-06 09:09:23.027332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.506 [2024-11-06 09:09:23.027425] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.506 [2024-11-06 09:09:23.027441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72006 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 72006 ']' 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 72006 00:16:46.506 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:46.764 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:46.764 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72006 00:16:46.764 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:46.764 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:46.764 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72006' 00:16:46.764 killing process with pid 72006 00:16:46.764 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 72006 00:16:46.764 [2024-11-06 09:09:23.070495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.764 09:09:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 72006 00:16:47.022 [2024-11-06 09:09:23.439311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.395 09:09:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:48.395 ************************************ 00:16:48.395 END TEST raid_state_function_test_sb 00:16:48.395 ************************************ 00:16:48.395 00:16:48.395 real 0m12.860s 00:16:48.395 user 0m21.151s 00:16:48.395 sys 
0m1.908s 00:16:48.395 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:48.395 09:09:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 09:09:24 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:16:48.395 09:09:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:48.395 09:09:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:48.395 09:09:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 ************************************ 00:16:48.395 START TEST raid_superblock_test 00:16:48.395 ************************************ 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72688 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72688 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 72688 ']' 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:48.395 09:09:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.395 [2024-11-06 09:09:24.680969] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:16:48.395 [2024-11-06 09:09:24.681321] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72688 ] 00:16:48.395 [2024-11-06 09:09:24.869707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.653 [2024-11-06 09:09:25.005319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.912 [2024-11-06 09:09:25.222179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.912 [2024-11-06 09:09:25.222253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:49.479 
09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.479 malloc1 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.479 [2024-11-06 09:09:25.786758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.479 [2024-11-06 09:09:25.786853] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.479 [2024-11-06 09:09:25.786891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:49.479 [2024-11-06 09:09:25.786907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.479 [2024-11-06 09:09:25.789790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.479 [2024-11-06 09:09:25.790006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.479 pt1 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.479 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 malloc2 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 [2024-11-06 09:09:25.841692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.480 [2024-11-06 09:09:25.841808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.480 [2024-11-06 09:09:25.841899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:49.480 [2024-11-06 09:09:25.841927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.480 [2024-11-06 09:09:25.845507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.480 [2024-11-06 09:09:25.845568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.480 
pt2 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 malloc3 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 [2024-11-06 09:09:25.925501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:49.480 [2024-11-06 09:09:25.925747] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.480 [2024-11-06 09:09:25.925940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:49.480 [2024-11-06 09:09:25.926088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.480 [2024-11-06 09:09:25.929650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.480 [2024-11-06 09:09:25.929860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:49.480 pt3 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 malloc4 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 [2024-11-06 09:09:25.991324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:49.480 [2024-11-06 09:09:25.991411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.480 [2024-11-06 09:09:25.991456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:49.480 [2024-11-06 09:09:25.991473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.480 [2024-11-06 09:09:25.995043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.480 [2024-11-06 09:09:25.995108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:49.480 pt4 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.480 09:09:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.480 [2024-11-06 09:09:26.003509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.480 [2024-11-06 
09:09:26.006639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.480 [2024-11-06 09:09:26.006943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:49.480 [2024-11-06 09:09:26.007075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:49.480 [2024-11-06 09:09:26.007379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:49.480 [2024-11-06 09:09:26.007422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:49.480 [2024-11-06 09:09:26.007879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:49.480 [2024-11-06 09:09:26.008178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:49.480 [2024-11-06 09:09:26.008204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:49.480 [2024-11-06 09:09:26.008479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.480 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.739 "name": "raid_bdev1", 00:16:49.739 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:49.739 "strip_size_kb": 64, 00:16:49.739 "state": "online", 00:16:49.739 "raid_level": "concat", 00:16:49.739 "superblock": true, 00:16:49.739 "num_base_bdevs": 4, 00:16:49.739 "num_base_bdevs_discovered": 4, 00:16:49.739 "num_base_bdevs_operational": 4, 00:16:49.739 "base_bdevs_list": [ 00:16:49.739 { 00:16:49.739 "name": "pt1", 00:16:49.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 "data_size": 63488 00:16:49.739 }, 00:16:49.739 { 00:16:49.739 "name": "pt2", 00:16:49.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 "data_size": 63488 00:16:49.739 }, 00:16:49.739 { 00:16:49.739 "name": "pt3", 00:16:49.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 
"data_size": 63488 00:16:49.739 }, 00:16:49.739 { 00:16:49.739 "name": "pt4", 00:16:49.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:49.739 "is_configured": true, 00:16:49.739 "data_offset": 2048, 00:16:49.739 "data_size": 63488 00:16:49.739 } 00:16:49.739 ] 00:16:49.739 }' 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.739 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.998 [2024-11-06 09:09:26.500120] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.998 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.257 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:50.257 "name": "raid_bdev1", 00:16:50.257 "aliases": [ 00:16:50.257 "e98e34e3-20ad-4ed7-9a67-0ff9555e4194" 
00:16:50.257 ], 00:16:50.257 "product_name": "Raid Volume", 00:16:50.257 "block_size": 512, 00:16:50.257 "num_blocks": 253952, 00:16:50.257 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:50.257 "assigned_rate_limits": { 00:16:50.257 "rw_ios_per_sec": 0, 00:16:50.257 "rw_mbytes_per_sec": 0, 00:16:50.257 "r_mbytes_per_sec": 0, 00:16:50.257 "w_mbytes_per_sec": 0 00:16:50.257 }, 00:16:50.257 "claimed": false, 00:16:50.257 "zoned": false, 00:16:50.257 "supported_io_types": { 00:16:50.257 "read": true, 00:16:50.257 "write": true, 00:16:50.257 "unmap": true, 00:16:50.257 "flush": true, 00:16:50.257 "reset": true, 00:16:50.257 "nvme_admin": false, 00:16:50.257 "nvme_io": false, 00:16:50.257 "nvme_io_md": false, 00:16:50.257 "write_zeroes": true, 00:16:50.257 "zcopy": false, 00:16:50.257 "get_zone_info": false, 00:16:50.257 "zone_management": false, 00:16:50.257 "zone_append": false, 00:16:50.257 "compare": false, 00:16:50.257 "compare_and_write": false, 00:16:50.257 "abort": false, 00:16:50.257 "seek_hole": false, 00:16:50.257 "seek_data": false, 00:16:50.257 "copy": false, 00:16:50.257 "nvme_iov_md": false 00:16:50.257 }, 00:16:50.257 "memory_domains": [ 00:16:50.257 { 00:16:50.257 "dma_device_id": "system", 00:16:50.257 "dma_device_type": 1 00:16:50.257 }, 00:16:50.257 { 00:16:50.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.257 "dma_device_type": 2 00:16:50.257 }, 00:16:50.257 { 00:16:50.257 "dma_device_id": "system", 00:16:50.257 "dma_device_type": 1 00:16:50.257 }, 00:16:50.257 { 00:16:50.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.257 "dma_device_type": 2 00:16:50.257 }, 00:16:50.258 { 00:16:50.258 "dma_device_id": "system", 00:16:50.258 "dma_device_type": 1 00:16:50.258 }, 00:16:50.258 { 00:16:50.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.258 "dma_device_type": 2 00:16:50.258 }, 00:16:50.258 { 00:16:50.258 "dma_device_id": "system", 00:16:50.258 "dma_device_type": 1 00:16:50.258 }, 00:16:50.258 { 00:16:50.258 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:50.258 "dma_device_type": 2 00:16:50.258 } 00:16:50.258 ], 00:16:50.258 "driver_specific": { 00:16:50.258 "raid": { 00:16:50.258 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:50.258 "strip_size_kb": 64, 00:16:50.258 "state": "online", 00:16:50.258 "raid_level": "concat", 00:16:50.258 "superblock": true, 00:16:50.258 "num_base_bdevs": 4, 00:16:50.258 "num_base_bdevs_discovered": 4, 00:16:50.258 "num_base_bdevs_operational": 4, 00:16:50.258 "base_bdevs_list": [ 00:16:50.258 { 00:16:50.258 "name": "pt1", 00:16:50.258 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.258 "is_configured": true, 00:16:50.258 "data_offset": 2048, 00:16:50.258 "data_size": 63488 00:16:50.258 }, 00:16:50.258 { 00:16:50.258 "name": "pt2", 00:16:50.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.258 "is_configured": true, 00:16:50.258 "data_offset": 2048, 00:16:50.258 "data_size": 63488 00:16:50.258 }, 00:16:50.258 { 00:16:50.258 "name": "pt3", 00:16:50.258 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.258 "is_configured": true, 00:16:50.258 "data_offset": 2048, 00:16:50.258 "data_size": 63488 00:16:50.258 }, 00:16:50.258 { 00:16:50.258 "name": "pt4", 00:16:50.258 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.258 "is_configured": true, 00:16:50.258 "data_offset": 2048, 00:16:50.258 "data_size": 63488 00:16:50.258 } 00:16:50.258 ] 00:16:50.258 } 00:16:50.258 } 00:16:50.258 }' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:50.258 pt2 00:16:50.258 pt3 00:16:50.258 pt4' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.258 09:09:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.258 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.517 [2024-11-06 09:09:26.876193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e98e34e3-20ad-4ed7-9a67-0ff9555e4194 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e98e34e3-20ad-4ed7-9a67-0ff9555e4194 ']' 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.517 [2024-11-06 09:09:26.923800] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.517 [2024-11-06 09:09:26.923829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.517 [2024-11-06 09:09:26.923937] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.517 [2024-11-06 09:09:26.924027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.517 [2024-11-06 09:09:26.924049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:50.517 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 09:09:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.518 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.776 09:09:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.776 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.777 [2024-11-06 09:09:27.083908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:50.777 [2024-11-06 09:09:27.086561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:50.777 [2024-11-06 09:09:27.086632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:50.777 [2024-11-06 09:09:27.086688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:50.777 [2024-11-06 09:09:27.086761] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:50.777 [2024-11-06 09:09:27.086847] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:50.777 [2024-11-06 09:09:27.086883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:50.777 [2024-11-06 09:09:27.086915] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:50.777 [2024-11-06 09:09:27.086937] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.777 [2024-11-06 09:09:27.086954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:16:50.777 request: 00:16:50.777 { 00:16:50.777 "name": "raid_bdev1", 00:16:50.777 "raid_level": "concat", 00:16:50.777 "base_bdevs": [ 00:16:50.777 "malloc1", 00:16:50.777 "malloc2", 00:16:50.777 "malloc3", 00:16:50.777 "malloc4" 00:16:50.777 ], 00:16:50.777 "strip_size_kb": 64, 00:16:50.777 "superblock": false, 00:16:50.777 "method": "bdev_raid_create", 00:16:50.777 "req_id": 1 00:16:50.777 } 00:16:50.777 Got JSON-RPC error response 00:16:50.777 response: 00:16:50.777 { 00:16:50.777 "code": -17, 00:16:50.777 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:50.777 } 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.777 [2024-11-06 09:09:27.147874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:50.777 [2024-11-06 09:09:27.148083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.777 [2024-11-06 09:09:27.148154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:50.777 [2024-11-06 09:09:27.148365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.777 [2024-11-06 09:09:27.151279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.777 [2024-11-06 09:09:27.151455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:50.777 [2024-11-06 09:09:27.151661] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:50.777 [2024-11-06 09:09:27.151872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:50.777 pt1 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.777 "name": "raid_bdev1", 00:16:50.777 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:50.777 "strip_size_kb": 64, 00:16:50.777 "state": "configuring", 00:16:50.777 "raid_level": "concat", 00:16:50.777 "superblock": true, 00:16:50.777 "num_base_bdevs": 4, 00:16:50.777 "num_base_bdevs_discovered": 1, 00:16:50.777 "num_base_bdevs_operational": 4, 00:16:50.777 "base_bdevs_list": [ 00:16:50.777 { 00:16:50.777 "name": "pt1", 00:16:50.777 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:50.777 "is_configured": true, 00:16:50.777 "data_offset": 2048, 00:16:50.777 "data_size": 63488 00:16:50.777 }, 00:16:50.777 { 00:16:50.777 "name": null, 00:16:50.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.777 "is_configured": false, 00:16:50.777 "data_offset": 2048, 00:16:50.777 "data_size": 63488 00:16:50.777 }, 00:16:50.777 { 00:16:50.777 "name": null, 00:16:50.777 
"uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.777 "is_configured": false, 00:16:50.777 "data_offset": 2048, 00:16:50.777 "data_size": 63488 00:16:50.777 }, 00:16:50.777 { 00:16:50.777 "name": null, 00:16:50.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:50.777 "is_configured": false, 00:16:50.777 "data_offset": 2048, 00:16:50.777 "data_size": 63488 00:16:50.777 } 00:16:50.777 ] 00:16:50.777 }' 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.777 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.344 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:51.344 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.344 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.344 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.344 [2024-11-06 09:09:27.660367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.344 [2024-11-06 09:09:27.660458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.345 [2024-11-06 09:09:27.660487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:51.345 [2024-11-06 09:09:27.660534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.345 [2024-11-06 09:09:27.661129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.345 [2024-11-06 09:09:27.661170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.345 [2024-11-06 09:09:27.661269] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:51.345 [2024-11-06 09:09:27.661305] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.345 pt2 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.345 [2024-11-06 09:09:27.668358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.345 09:09:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.345 "name": "raid_bdev1", 00:16:51.345 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:51.345 "strip_size_kb": 64, 00:16:51.345 "state": "configuring", 00:16:51.345 "raid_level": "concat", 00:16:51.345 "superblock": true, 00:16:51.345 "num_base_bdevs": 4, 00:16:51.345 "num_base_bdevs_discovered": 1, 00:16:51.345 "num_base_bdevs_operational": 4, 00:16:51.345 "base_bdevs_list": [ 00:16:51.345 { 00:16:51.345 "name": "pt1", 00:16:51.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.345 "is_configured": true, 00:16:51.345 "data_offset": 2048, 00:16:51.345 "data_size": 63488 00:16:51.345 }, 00:16:51.345 { 00:16:51.345 "name": null, 00:16:51.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.345 "is_configured": false, 00:16:51.345 "data_offset": 0, 00:16:51.345 "data_size": 63488 00:16:51.345 }, 00:16:51.345 { 00:16:51.345 "name": null, 00:16:51.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.345 "is_configured": false, 00:16:51.345 "data_offset": 2048, 00:16:51.345 "data_size": 63488 00:16:51.345 }, 00:16:51.345 { 00:16:51.345 "name": null, 00:16:51.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.345 "is_configured": false, 00:16:51.345 "data_offset": 2048, 00:16:51.345 "data_size": 63488 00:16:51.345 } 00:16:51.345 ] 00:16:51.345 }' 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.345 09:09:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.911 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:51.911 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:51.911 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.912 [2024-11-06 09:09:28.192557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:51.912 [2024-11-06 09:09:28.192657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.912 [2024-11-06 09:09:28.192687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:51.912 [2024-11-06 09:09:28.192701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.912 [2024-11-06 09:09:28.193323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.912 [2024-11-06 09:09:28.193357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:51.912 [2024-11-06 09:09:28.193462] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:51.912 [2024-11-06 09:09:28.193509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.912 pt2 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.912 [2024-11-06 09:09:28.200505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:51.912 [2024-11-06 09:09:28.200594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.912 [2024-11-06 09:09:28.200630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:51.912 [2024-11-06 09:09:28.200646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.912 [2024-11-06 09:09:28.201128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.912 [2024-11-06 09:09:28.201160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:51.912 [2024-11-06 09:09:28.201240] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:51.912 [2024-11-06 09:09:28.201267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:51.912 pt3 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.912 [2024-11-06 09:09:28.208458] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:51.912 [2024-11-06 09:09:28.208733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.912 [2024-11-06 09:09:28.208771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:51.912 [2024-11-06 09:09:28.208786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.912 [2024-11-06 09:09:28.209292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.912 [2024-11-06 09:09:28.209328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:51.912 [2024-11-06 09:09:28.209410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:51.912 [2024-11-06 09:09:28.209438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:51.912 [2024-11-06 09:09:28.209639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:51.912 [2024-11-06 09:09:28.209654] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:51.912 [2024-11-06 09:09:28.210002] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.912 [2024-11-06 09:09:28.210194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:51.912 [2024-11-06 09:09:28.210217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:51.912 [2024-11-06 09:09:28.210372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.912 pt4 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.912 "name": "raid_bdev1", 00:16:51.912 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:51.912 "strip_size_kb": 64, 00:16:51.912 "state": "online", 00:16:51.912 "raid_level": "concat", 00:16:51.912 
"superblock": true, 00:16:51.912 "num_base_bdevs": 4, 00:16:51.912 "num_base_bdevs_discovered": 4, 00:16:51.912 "num_base_bdevs_operational": 4, 00:16:51.912 "base_bdevs_list": [ 00:16:51.912 { 00:16:51.912 "name": "pt1", 00:16:51.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:51.912 "is_configured": true, 00:16:51.912 "data_offset": 2048, 00:16:51.912 "data_size": 63488 00:16:51.912 }, 00:16:51.912 { 00:16:51.912 "name": "pt2", 00:16:51.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.912 "is_configured": true, 00:16:51.912 "data_offset": 2048, 00:16:51.912 "data_size": 63488 00:16:51.912 }, 00:16:51.912 { 00:16:51.912 "name": "pt3", 00:16:51.912 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.912 "is_configured": true, 00:16:51.912 "data_offset": 2048, 00:16:51.912 "data_size": 63488 00:16:51.912 }, 00:16:51.912 { 00:16:51.912 "name": "pt4", 00:16:51.912 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:51.912 "is_configured": true, 00:16:51.912 "data_offset": 2048, 00:16:51.912 "data_size": 63488 00:16:51.912 } 00:16:51.912 ] 00:16:51.912 }' 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.912 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:52.479 09:09:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:52.479 [2024-11-06 09:09:28.721086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.479 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:52.479 "name": "raid_bdev1", 00:16:52.479 "aliases": [ 00:16:52.479 "e98e34e3-20ad-4ed7-9a67-0ff9555e4194" 00:16:52.479 ], 00:16:52.479 "product_name": "Raid Volume", 00:16:52.479 "block_size": 512, 00:16:52.479 "num_blocks": 253952, 00:16:52.479 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:52.479 "assigned_rate_limits": { 00:16:52.479 "rw_ios_per_sec": 0, 00:16:52.479 "rw_mbytes_per_sec": 0, 00:16:52.479 "r_mbytes_per_sec": 0, 00:16:52.479 "w_mbytes_per_sec": 0 00:16:52.479 }, 00:16:52.479 "claimed": false, 00:16:52.479 "zoned": false, 00:16:52.479 "supported_io_types": { 00:16:52.479 "read": true, 00:16:52.479 "write": true, 00:16:52.479 "unmap": true, 00:16:52.479 "flush": true, 00:16:52.479 "reset": true, 00:16:52.479 "nvme_admin": false, 00:16:52.479 "nvme_io": false, 00:16:52.479 "nvme_io_md": false, 00:16:52.479 "write_zeroes": true, 00:16:52.479 "zcopy": false, 00:16:52.479 "get_zone_info": false, 00:16:52.479 "zone_management": false, 00:16:52.479 "zone_append": false, 00:16:52.479 "compare": false, 00:16:52.479 "compare_and_write": false, 00:16:52.479 "abort": false, 00:16:52.479 "seek_hole": false, 00:16:52.479 "seek_data": false, 00:16:52.479 "copy": false, 00:16:52.479 "nvme_iov_md": false 00:16:52.479 }, 00:16:52.479 
"memory_domains": [ 00:16:52.479 { 00:16:52.479 "dma_device_id": "system", 00:16:52.479 "dma_device_type": 1 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.479 "dma_device_type": 2 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "dma_device_id": "system", 00:16:52.479 "dma_device_type": 1 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.479 "dma_device_type": 2 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "dma_device_id": "system", 00:16:52.479 "dma_device_type": 1 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.479 "dma_device_type": 2 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "dma_device_id": "system", 00:16:52.479 "dma_device_type": 1 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.479 "dma_device_type": 2 00:16:52.479 } 00:16:52.479 ], 00:16:52.479 "driver_specific": { 00:16:52.479 "raid": { 00:16:52.479 "uuid": "e98e34e3-20ad-4ed7-9a67-0ff9555e4194", 00:16:52.479 "strip_size_kb": 64, 00:16:52.479 "state": "online", 00:16:52.479 "raid_level": "concat", 00:16:52.479 "superblock": true, 00:16:52.479 "num_base_bdevs": 4, 00:16:52.479 "num_base_bdevs_discovered": 4, 00:16:52.479 "num_base_bdevs_operational": 4, 00:16:52.479 "base_bdevs_list": [ 00:16:52.479 { 00:16:52.479 "name": "pt1", 00:16:52.479 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:52.479 "is_configured": true, 00:16:52.479 "data_offset": 2048, 00:16:52.479 "data_size": 63488 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "name": "pt2", 00:16:52.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.479 "is_configured": true, 00:16:52.479 "data_offset": 2048, 00:16:52.479 "data_size": 63488 00:16:52.479 }, 00:16:52.479 { 00:16:52.479 "name": "pt3", 00:16:52.479 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.479 "is_configured": true, 00:16:52.479 "data_offset": 2048, 00:16:52.480 "data_size": 63488 
00:16:52.480 }, 00:16:52.480 { 00:16:52.480 "name": "pt4", 00:16:52.480 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:52.480 "is_configured": true, 00:16:52.480 "data_offset": 2048, 00:16:52.480 "data_size": 63488 00:16:52.480 } 00:16:52.480 ] 00:16:52.480 } 00:16:52.480 } 00:16:52.480 }' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:52.480 pt2 00:16:52.480 pt3 00:16:52.480 pt4' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.480 09:09:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.480 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.480 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.480 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:52.480 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.480 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.480 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.738 [2024-11-06 09:09:29.113157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e98e34e3-20ad-4ed7-9a67-0ff9555e4194 '!=' e98e34e3-20ad-4ed7-9a67-0ff9555e4194 ']' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72688 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 72688 ']' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 72688 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@957 -- # uname 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72688 00:16:52.738 killing process with pid 72688 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72688' 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 72688 00:16:52.738 [2024-11-06 09:09:29.207104] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.738 09:09:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 72688 00:16:52.738 [2024-11-06 09:09:29.207208] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.738 [2024-11-06 09:09:29.207311] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.738 [2024-11-06 09:09:29.207326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:53.304 [2024-11-06 09:09:29.574460] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.239 09:09:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:54.239 00:16:54.239 real 0m6.063s 00:16:54.239 user 0m9.111s 00:16:54.239 sys 0m0.883s 00:16:54.239 ************************************ 00:16:54.239 END TEST raid_superblock_test 00:16:54.239 ************************************ 00:16:54.239 09:09:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:54.239 09:09:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.239 09:09:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:16:54.239 09:09:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:54.239 09:09:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:54.239 09:09:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.239 ************************************ 00:16:54.239 START TEST raid_read_error_test 00:16:54.239 ************************************ 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7tfShbPUOi 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72958 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72958 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 72958 ']' 00:16:54.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:54.239 09:09:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.497 [2024-11-06 09:09:30.848109] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:16:54.497 [2024-11-06 09:09:30.848517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:16:54.754 [2024-11-06 09:09:31.047204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.754 [2024-11-06 09:09:31.232540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.012 [2024-11-06 09:09:31.436796] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.012 [2024-11-06 09:09:31.436854] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 BaseBdev1_malloc 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 true 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 [2024-11-06 09:09:31.892494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:55.602 [2024-11-06 09:09:31.892560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.602 [2024-11-06 09:09:31.892591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:55.602 [2024-11-06 09:09:31.892610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.602 [2024-11-06 09:09:31.895639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.602 [2024-11-06 09:09:31.895731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.602 BaseBdev1 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 BaseBdev2_malloc 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 true 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 [2024-11-06 09:09:31.953887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:55.602 [2024-11-06 09:09:31.953976] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.602 [2024-11-06 09:09:31.954002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:55.602 [2024-11-06 09:09:31.954020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.602 [2024-11-06 09:09:31.956791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.602 [2024-11-06 09:09:31.956874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:55.602 BaseBdev2 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 BaseBdev3_malloc 00:16:55.602 09:09:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 true 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 [2024-11-06 09:09:32.030957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:55.602 [2024-11-06 09:09:32.031053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.602 [2024-11-06 09:09:32.031078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:55.602 [2024-11-06 09:09:32.031095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.602 [2024-11-06 09:09:32.033897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.602 [2024-11-06 09:09:32.033948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:55.602 BaseBdev3 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 BaseBdev4_malloc 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 true 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.602 [2024-11-06 09:09:32.092772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:55.602 [2024-11-06 09:09:32.092879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.602 [2024-11-06 09:09:32.092913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:55.602 [2024-11-06 09:09:32.092931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.602 [2024-11-06 09:09:32.095854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.602 [2024-11-06 09:09:32.095937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:55.602 BaseBdev4 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:16:55.602 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 [2024-11-06 09:09:32.100860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.603 [2024-11-06 09:09:32.103452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.603 [2024-11-06 09:09:32.103559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.603 [2024-11-06 09:09:32.103656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:55.603 [2024-11-06 09:09:32.103988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:55.603 [2024-11-06 09:09:32.104012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:16:55.603 [2024-11-06 09:09:32.104303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:16:55.603 [2024-11-06 09:09:32.104512] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:55.603 [2024-11-06 09:09:32.104531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:55.603 [2024-11-06 09:09:32.104759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:55.603 09:09:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.603 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.861 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.861 "name": "raid_bdev1", 00:16:55.861 "uuid": "17c9a8bf-9aa0-4d85-b6e2-eb1bd4d8bedf", 00:16:55.861 "strip_size_kb": 64, 00:16:55.861 "state": "online", 00:16:55.861 "raid_level": "concat", 00:16:55.861 "superblock": true, 00:16:55.861 "num_base_bdevs": 4, 00:16:55.861 "num_base_bdevs_discovered": 4, 00:16:55.861 "num_base_bdevs_operational": 4, 00:16:55.861 "base_bdevs_list": [ 
00:16:55.861 { 00:16:55.861 "name": "BaseBdev1", 00:16:55.861 "uuid": "17bd6c92-b8d1-56c9-b7ba-96b0d9ee96da", 00:16:55.861 "is_configured": true, 00:16:55.861 "data_offset": 2048, 00:16:55.861 "data_size": 63488 00:16:55.861 }, 00:16:55.861 { 00:16:55.861 "name": "BaseBdev2", 00:16:55.861 "uuid": "fdddb73f-b1a8-558d-a111-8c4c6ce85f1b", 00:16:55.861 "is_configured": true, 00:16:55.861 "data_offset": 2048, 00:16:55.861 "data_size": 63488 00:16:55.861 }, 00:16:55.861 { 00:16:55.861 "name": "BaseBdev3", 00:16:55.861 "uuid": "71cb6f5e-03e2-529b-b047-7de6c84fb411", 00:16:55.862 "is_configured": true, 00:16:55.862 "data_offset": 2048, 00:16:55.862 "data_size": 63488 00:16:55.862 }, 00:16:55.862 { 00:16:55.862 "name": "BaseBdev4", 00:16:55.862 "uuid": "8b7daca3-db97-5c19-b406-18f2a3f64a53", 00:16:55.862 "is_configured": true, 00:16:55.862 "data_offset": 2048, 00:16:55.862 "data_size": 63488 00:16:55.862 } 00:16:55.862 ] 00:16:55.862 }' 00:16:55.862 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.862 09:09:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.120 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:56.120 09:09:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:56.378 [2024-11-06 09:09:32.750712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:57.313 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.314 09:09:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.314 09:09:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.314 "name": "raid_bdev1", 00:16:57.314 "uuid": "17c9a8bf-9aa0-4d85-b6e2-eb1bd4d8bedf", 00:16:57.314 "strip_size_kb": 64, 00:16:57.314 "state": "online", 00:16:57.314 "raid_level": "concat", 00:16:57.314 "superblock": true, 00:16:57.314 "num_base_bdevs": 4, 00:16:57.314 "num_base_bdevs_discovered": 4, 00:16:57.314 "num_base_bdevs_operational": 4, 00:16:57.314 "base_bdevs_list": [ 00:16:57.314 { 00:16:57.314 "name": "BaseBdev1", 00:16:57.314 "uuid": "17bd6c92-b8d1-56c9-b7ba-96b0d9ee96da", 00:16:57.314 "is_configured": true, 00:16:57.314 "data_offset": 2048, 00:16:57.314 "data_size": 63488 00:16:57.314 }, 00:16:57.314 { 00:16:57.314 "name": "BaseBdev2", 00:16:57.314 "uuid": "fdddb73f-b1a8-558d-a111-8c4c6ce85f1b", 00:16:57.314 "is_configured": true, 00:16:57.314 "data_offset": 2048, 00:16:57.314 "data_size": 63488 00:16:57.314 }, 00:16:57.314 { 00:16:57.314 "name": "BaseBdev3", 00:16:57.314 "uuid": "71cb6f5e-03e2-529b-b047-7de6c84fb411", 00:16:57.314 "is_configured": true, 00:16:57.314 "data_offset": 2048, 00:16:57.314 "data_size": 63488 00:16:57.314 }, 00:16:57.314 { 00:16:57.314 "name": "BaseBdev4", 00:16:57.314 "uuid": "8b7daca3-db97-5c19-b406-18f2a3f64a53", 00:16:57.314 "is_configured": true, 00:16:57.314 "data_offset": 2048, 00:16:57.314 "data_size": 63488 00:16:57.314 } 00:16:57.314 ] 00:16:57.314 }' 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.314 09:09:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.880 [2024-11-06 09:09:34.158724] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.880 [2024-11-06 09:09:34.158766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.880 [2024-11-06 09:09:34.162206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.880 [2024-11-06 09:09:34.162293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.880 [2024-11-06 09:09:34.162359] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.880 [2024-11-06 09:09:34.162381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:57.880 { 00:16:57.880 "results": [ 00:16:57.880 { 00:16:57.880 "job": "raid_bdev1", 00:16:57.880 "core_mask": "0x1", 00:16:57.880 "workload": "randrw", 00:16:57.880 "percentage": 50, 00:16:57.880 "status": "finished", 00:16:57.880 "queue_depth": 1, 00:16:57.880 "io_size": 131072, 00:16:57.880 "runtime": 1.405195, 00:16:57.880 "iops": 10325.257348624213, 00:16:57.880 "mibps": 1290.6571685780266, 00:16:57.880 "io_failed": 1, 00:16:57.880 "io_timeout": 0, 00:16:57.880 "avg_latency_us": 135.00435311070737, 00:16:57.880 "min_latency_us": 38.167272727272724, 00:16:57.880 "max_latency_us": 1921.3963636363637 00:16:57.880 } 00:16:57.880 ], 00:16:57.880 "core_count": 1 00:16:57.880 } 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72958 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 72958 ']' 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 72958 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:16:57.880 09:09:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72958 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72958' 00:16:57.880 killing process with pid 72958 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 72958 00:16:57.880 [2024-11-06 09:09:34.194368] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:57.880 09:09:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 72958 00:16:58.138 [2024-11-06 09:09:34.506282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7tfShbPUOi 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:16:59.514 00:16:59.514 real 0m4.965s 00:16:59.514 user 0m6.093s 00:16:59.514 sys 0m0.646s 00:16:59.514 ************************************ 
00:16:59.514 END TEST raid_read_error_test 00:16:59.514 ************************************ 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:59.514 09:09:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.514 09:09:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:16:59.514 09:09:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:59.514 09:09:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:59.514 09:09:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.514 ************************************ 00:16:59.514 START TEST raid_write_error_test 00:16:59.514 ************************************ 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:59.514 09:09:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:16:59.514 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.AlaQIP3REX 00:16:59.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73104 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73104 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 73104 ']' 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:59.515 09:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.515 [2024-11-06 09:09:35.835973] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:16:59.515 [2024-11-06 09:09:35.836148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73104 ] 00:16:59.515 [2024-11-06 09:09:36.029360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.774 [2024-11-06 09:09:36.193211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.032 [2024-11-06 09:09:36.433759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.033 [2024-11-06 09:09:36.433840] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 BaseBdev1_malloc 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 true 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 [2024-11-06 09:09:36.891443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:00.600 [2024-11-06 09:09:36.891716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.600 [2024-11-06 09:09:36.891758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:00.600 [2024-11-06 09:09:36.891778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.600 [2024-11-06 09:09:36.894756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.600 [2024-11-06 09:09:36.894984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:00.600 BaseBdev1 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 BaseBdev2_malloc 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:00.600 09:09:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 true 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 [2024-11-06 09:09:36.962682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:00.600 [2024-11-06 09:09:36.962753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.600 [2024-11-06 09:09:36.962779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:00.600 [2024-11-06 09:09:36.962796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.600 [2024-11-06 09:09:36.965673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.600 [2024-11-06 09:09:36.965727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:00.600 BaseBdev2 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:00.600 BaseBdev3_malloc 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 true 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 [2024-11-06 09:09:37.036234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:00.600 [2024-11-06 09:09:37.036314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.600 [2024-11-06 09:09:37.036341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:00.600 [2024-11-06 09:09:37.036357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.600 [2024-11-06 09:09:37.039254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.600 [2024-11-06 09:09:37.039459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:00.600 BaseBdev3 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 BaseBdev4_malloc 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.600 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.600 true 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.601 [2024-11-06 09:09:37.099006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:00.601 [2024-11-06 09:09:37.099094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.601 [2024-11-06 09:09:37.099123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:00.601 [2024-11-06 09:09:37.099141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.601 [2024-11-06 09:09:37.102076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.601 [2024-11-06 09:09:37.102132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:00.601 BaseBdev4 
00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.601 [2024-11-06 09:09:37.107125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.601 [2024-11-06 09:09:37.109710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.601 [2024-11-06 09:09:37.109847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.601 [2024-11-06 09:09:37.109970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:00.601 [2024-11-06 09:09:37.110292] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:00.601 [2024-11-06 09:09:37.110315] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:00.601 [2024-11-06 09:09:37.110672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:00.601 [2024-11-06 09:09:37.110903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:00.601 [2024-11-06 09:09:37.110923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:00.601 [2024-11-06 09:09:37.111164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.601 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.859 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.859 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.859 "name": "raid_bdev1", 00:17:00.859 "uuid": "6e791b7c-4109-4550-b166-9bc60b8eb6b0", 00:17:00.859 "strip_size_kb": 64, 00:17:00.859 "state": "online", 00:17:00.859 "raid_level": "concat", 00:17:00.859 "superblock": true, 00:17:00.859 "num_base_bdevs": 4, 00:17:00.859 "num_base_bdevs_discovered": 4, 00:17:00.859 
"num_base_bdevs_operational": 4, 00:17:00.859 "base_bdevs_list": [ 00:17:00.859 { 00:17:00.859 "name": "BaseBdev1", 00:17:00.859 "uuid": "68338a90-96bb-578c-ae98-d0feefd0d1c0", 00:17:00.859 "is_configured": true, 00:17:00.859 "data_offset": 2048, 00:17:00.859 "data_size": 63488 00:17:00.859 }, 00:17:00.859 { 00:17:00.859 "name": "BaseBdev2", 00:17:00.859 "uuid": "3d064adb-c1ce-59e6-943c-367ae825be75", 00:17:00.859 "is_configured": true, 00:17:00.859 "data_offset": 2048, 00:17:00.859 "data_size": 63488 00:17:00.859 }, 00:17:00.859 { 00:17:00.859 "name": "BaseBdev3", 00:17:00.859 "uuid": "1edf2e7a-12d2-597b-8dda-c1a7091cb7df", 00:17:00.859 "is_configured": true, 00:17:00.859 "data_offset": 2048, 00:17:00.859 "data_size": 63488 00:17:00.859 }, 00:17:00.859 { 00:17:00.859 "name": "BaseBdev4", 00:17:00.859 "uuid": "285b2ad5-d238-520f-9c17-b478e5355636", 00:17:00.859 "is_configured": true, 00:17:00.859 "data_offset": 2048, 00:17:00.859 "data_size": 63488 00:17:00.859 } 00:17:00.859 ] 00:17:00.859 }' 00:17:00.859 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.859 09:09:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.117 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:01.117 09:09:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:01.376 [2024-11-06 09:09:37.748828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.311 09:09:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.311 "name": "raid_bdev1", 00:17:02.311 "uuid": "6e791b7c-4109-4550-b166-9bc60b8eb6b0", 00:17:02.311 "strip_size_kb": 64, 00:17:02.311 "state": "online", 00:17:02.311 "raid_level": "concat", 00:17:02.311 "superblock": true, 00:17:02.311 "num_base_bdevs": 4, 00:17:02.311 "num_base_bdevs_discovered": 4, 00:17:02.311 "num_base_bdevs_operational": 4, 00:17:02.311 "base_bdevs_list": [ 00:17:02.311 { 00:17:02.311 "name": "BaseBdev1", 00:17:02.311 "uuid": "68338a90-96bb-578c-ae98-d0feefd0d1c0", 00:17:02.311 "is_configured": true, 00:17:02.311 "data_offset": 2048, 00:17:02.311 "data_size": 63488 00:17:02.311 }, 00:17:02.311 { 00:17:02.311 "name": "BaseBdev2", 00:17:02.311 "uuid": "3d064adb-c1ce-59e6-943c-367ae825be75", 00:17:02.311 "is_configured": true, 00:17:02.311 "data_offset": 2048, 00:17:02.311 "data_size": 63488 00:17:02.311 }, 00:17:02.311 { 00:17:02.311 "name": "BaseBdev3", 00:17:02.311 "uuid": "1edf2e7a-12d2-597b-8dda-c1a7091cb7df", 00:17:02.311 "is_configured": true, 00:17:02.311 "data_offset": 2048, 00:17:02.311 "data_size": 63488 00:17:02.311 }, 00:17:02.311 { 00:17:02.311 "name": "BaseBdev4", 00:17:02.311 "uuid": "285b2ad5-d238-520f-9c17-b478e5355636", 00:17:02.311 "is_configured": true, 00:17:02.311 "data_offset": 2048, 00:17:02.311 "data_size": 63488 00:17:02.311 } 00:17:02.311 ] 00:17:02.311 }' 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.311 09:09:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:02.888 [2024-11-06 09:09:39.171272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.888 [2024-11-06 09:09:39.171450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.888 [2024-11-06 09:09:39.174917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.888 [2024-11-06 09:09:39.175125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.888 [2024-11-06 09:09:39.175235] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.888 [2024-11-06 09:09:39.175431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:02.888 { 00:17:02.888 "results": [ 00:17:02.888 { 00:17:02.888 "job": "raid_bdev1", 00:17:02.888 "core_mask": "0x1", 00:17:02.888 "workload": "randrw", 00:17:02.888 "percentage": 50, 00:17:02.888 "status": "finished", 00:17:02.888 "queue_depth": 1, 00:17:02.888 "io_size": 131072, 00:17:02.888 "runtime": 1.41996, 00:17:02.888 "iops": 10249.584495337896, 00:17:02.888 "mibps": 1281.198061917237, 00:17:02.888 "io_failed": 1, 00:17:02.888 "io_timeout": 0, 00:17:02.888 "avg_latency_us": 136.23474794665998, 00:17:02.888 "min_latency_us": 39.33090909090909, 00:17:02.888 "max_latency_us": 1899.0545454545454 00:17:02.888 } 00:17:02.888 ], 00:17:02.888 "core_count": 1 00:17:02.888 } 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73104 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 73104 ']' 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 73104 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test --
common/autotest_common.sh@957 -- # uname 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73104 00:17:02.888 killing process with pid 73104 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73104' 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 73104 00:17:02.888 [2024-11-06 09:09:39.209725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.888 09:09:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 73104 00:17:03.147 [2024-11-06 09:09:39.499664] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AlaQIP3REX 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:17:04.082 00:17:04.082 real 0m4.880s 00:17:04.082 user 0m6.030s 
00:17:04.082 sys 0m0.617s 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:04.082 ************************************ 00:17:04.082 END TEST raid_write_error_test 00:17:04.082 ************************************ 00:17:04.082 09:09:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.341 09:09:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:17:04.341 09:09:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:17:04.341 09:09:40 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:04.341 09:09:40 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:04.341 09:09:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.341 ************************************ 00:17:04.341 START TEST raid_state_function_test 00:17:04.341 ************************************ 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.341 
09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:04.341 09:09:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:04.341 Process raid pid: 73253 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73253 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73253' 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73253 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73253 ']' 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:04.341 09:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.341 [2024-11-06 09:09:40.763235] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:17:04.341 [2024-11-06 09:09:40.763681] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.600 [2024-11-06 09:09:40.944062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.600 [2024-11-06 09:09:41.076893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.859 [2024-11-06 09:09:41.287406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.859 [2024-11-06 09:09:41.287455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.460 [2024-11-06 09:09:41.725951] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.460 [2024-11-06 09:09:41.726030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.460 [2024-11-06 09:09:41.726048] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.460 [2024-11-06 09:09:41.726065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.460 [2024-11-06 09:09:41.726075] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:17:05.460 [2024-11-06 09:09:41.726090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.460 [2024-11-06 09:09:41.726100] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.460 [2024-11-06 09:09:41.726114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.460 "name": "Existed_Raid", 00:17:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.460 "strip_size_kb": 0, 00:17:05.460 "state": "configuring", 00:17:05.460 "raid_level": "raid1", 00:17:05.460 "superblock": false, 00:17:05.460 "num_base_bdevs": 4, 00:17:05.460 "num_base_bdevs_discovered": 0, 00:17:05.460 "num_base_bdevs_operational": 4, 00:17:05.460 "base_bdevs_list": [ 00:17:05.460 { 00:17:05.460 "name": "BaseBdev1", 00:17:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.460 "is_configured": false, 00:17:05.460 "data_offset": 0, 00:17:05.460 "data_size": 0 00:17:05.460 }, 00:17:05.460 { 00:17:05.460 "name": "BaseBdev2", 00:17:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.460 "is_configured": false, 00:17:05.460 "data_offset": 0, 00:17:05.460 "data_size": 0 00:17:05.460 }, 00:17:05.460 { 00:17:05.460 "name": "BaseBdev3", 00:17:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.460 "is_configured": false, 00:17:05.460 "data_offset": 0, 00:17:05.460 "data_size": 0 00:17:05.460 }, 00:17:05.460 { 00:17:05.460 "name": "BaseBdev4", 00:17:05.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.460 "is_configured": false, 00:17:05.460 "data_offset": 0, 00:17:05.460 "data_size": 0 00:17:05.460 } 00:17:05.460 ] 00:17:05.460 }' 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.460 09:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.719 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.720 [2024-11-06 09:09:42.238057] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.720 [2024-11-06 09:09:42.238105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.720 [2024-11-06 09:09:42.246027] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.720 [2024-11-06 09:09:42.246079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.720 [2024-11-06 09:09:42.246095] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.720 [2024-11-06 09:09:42.246110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.720 [2024-11-06 09:09:42.246120] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.720 [2024-11-06 09:09:42.246134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.720 [2024-11-06 09:09:42.246143] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.720 [2024-11-06 09:09:42.246158] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.720 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.979 [2024-11-06 09:09:42.292877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.979 BaseBdev1 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.979 [ 00:17:05.979 { 00:17:05.979 "name": "BaseBdev1", 00:17:05.979 "aliases": [ 00:17:05.979 "78bbfac8-333d-45ed-9737-e762a9391e5b" 00:17:05.979 ], 00:17:05.979 "product_name": "Malloc disk", 00:17:05.979 "block_size": 512, 00:17:05.979 "num_blocks": 65536, 00:17:05.979 "uuid": "78bbfac8-333d-45ed-9737-e762a9391e5b", 00:17:05.979 "assigned_rate_limits": { 00:17:05.979 "rw_ios_per_sec": 0, 00:17:05.979 "rw_mbytes_per_sec": 0, 00:17:05.979 "r_mbytes_per_sec": 0, 00:17:05.979 "w_mbytes_per_sec": 0 00:17:05.979 }, 00:17:05.979 "claimed": true, 00:17:05.979 "claim_type": "exclusive_write", 00:17:05.979 "zoned": false, 00:17:05.979 "supported_io_types": { 00:17:05.979 "read": true, 00:17:05.979 "write": true, 00:17:05.979 "unmap": true, 00:17:05.979 "flush": true, 00:17:05.979 "reset": true, 00:17:05.979 "nvme_admin": false, 00:17:05.979 "nvme_io": false, 00:17:05.979 "nvme_io_md": false, 00:17:05.979 "write_zeroes": true, 00:17:05.979 "zcopy": true, 00:17:05.979 "get_zone_info": false, 00:17:05.979 "zone_management": false, 00:17:05.979 "zone_append": false, 00:17:05.979 "compare": false, 00:17:05.979 "compare_and_write": false, 00:17:05.979 "abort": true, 00:17:05.979 "seek_hole": false, 00:17:05.979 "seek_data": false, 00:17:05.979 "copy": true, 00:17:05.979 "nvme_iov_md": false 00:17:05.979 }, 00:17:05.979 "memory_domains": [ 00:17:05.979 { 00:17:05.979 "dma_device_id": "system", 00:17:05.979 "dma_device_type": 1 00:17:05.979 }, 00:17:05.979 { 00:17:05.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.979 "dma_device_type": 2 00:17:05.979 } 00:17:05.979 ], 00:17:05.979 "driver_specific": {} 00:17:05.979 } 00:17:05.979 ] 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.979 "name": "Existed_Raid", 
00:17:05.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.979 "strip_size_kb": 0, 00:17:05.979 "state": "configuring", 00:17:05.979 "raid_level": "raid1", 00:17:05.979 "superblock": false, 00:17:05.979 "num_base_bdevs": 4, 00:17:05.979 "num_base_bdevs_discovered": 1, 00:17:05.979 "num_base_bdevs_operational": 4, 00:17:05.979 "base_bdevs_list": [ 00:17:05.979 { 00:17:05.979 "name": "BaseBdev1", 00:17:05.979 "uuid": "78bbfac8-333d-45ed-9737-e762a9391e5b", 00:17:05.979 "is_configured": true, 00:17:05.979 "data_offset": 0, 00:17:05.979 "data_size": 65536 00:17:05.979 }, 00:17:05.979 { 00:17:05.979 "name": "BaseBdev2", 00:17:05.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.979 "is_configured": false, 00:17:05.979 "data_offset": 0, 00:17:05.979 "data_size": 0 00:17:05.979 }, 00:17:05.979 { 00:17:05.979 "name": "BaseBdev3", 00:17:05.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.979 "is_configured": false, 00:17:05.979 "data_offset": 0, 00:17:05.979 "data_size": 0 00:17:05.979 }, 00:17:05.979 { 00:17:05.979 "name": "BaseBdev4", 00:17:05.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.979 "is_configured": false, 00:17:05.979 "data_offset": 0, 00:17:05.979 "data_size": 0 00:17:05.979 } 00:17:05.979 ] 00:17:05.979 }' 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.979 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.548 [2024-11-06 09:09:42.857136] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.548 [2024-11-06 09:09:42.857199] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.548 [2024-11-06 09:09:42.865179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.548 [2024-11-06 09:09:42.868009] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.548 [2024-11-06 09:09:42.868100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.548 [2024-11-06 09:09:42.868132] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.548 [2024-11-06 09:09:42.868150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.548 [2024-11-06 09:09:42.868159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.548 [2024-11-06 09:09:42.868172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:06.548 
09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.548 "name": "Existed_Raid", 00:17:06.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.548 "strip_size_kb": 0, 00:17:06.548 "state": "configuring", 00:17:06.548 "raid_level": "raid1", 00:17:06.548 "superblock": false, 00:17:06.548 "num_base_bdevs": 4, 00:17:06.548 "num_base_bdevs_discovered": 1, 
00:17:06.548 "num_base_bdevs_operational": 4, 00:17:06.548 "base_bdevs_list": [ 00:17:06.548 { 00:17:06.548 "name": "BaseBdev1", 00:17:06.548 "uuid": "78bbfac8-333d-45ed-9737-e762a9391e5b", 00:17:06.548 "is_configured": true, 00:17:06.548 "data_offset": 0, 00:17:06.548 "data_size": 65536 00:17:06.548 }, 00:17:06.548 { 00:17:06.548 "name": "BaseBdev2", 00:17:06.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.548 "is_configured": false, 00:17:06.548 "data_offset": 0, 00:17:06.548 "data_size": 0 00:17:06.548 }, 00:17:06.548 { 00:17:06.548 "name": "BaseBdev3", 00:17:06.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.548 "is_configured": false, 00:17:06.548 "data_offset": 0, 00:17:06.548 "data_size": 0 00:17:06.548 }, 00:17:06.548 { 00:17:06.548 "name": "BaseBdev4", 00:17:06.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.548 "is_configured": false, 00:17:06.548 "data_offset": 0, 00:17:06.548 "data_size": 0 00:17:06.548 } 00:17:06.548 ] 00:17:06.548 }' 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.548 09:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.115 [2024-11-06 09:09:43.486361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.115 BaseBdev2 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.115 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.115 [ 00:17:07.115 { 00:17:07.115 "name": "BaseBdev2", 00:17:07.115 "aliases": [ 00:17:07.115 "5aa3e6ca-7f2a-49fa-a9c3-6cb19f569309" 00:17:07.115 ], 00:17:07.115 "product_name": "Malloc disk", 00:17:07.115 "block_size": 512, 00:17:07.116 "num_blocks": 65536, 00:17:07.116 "uuid": "5aa3e6ca-7f2a-49fa-a9c3-6cb19f569309", 00:17:07.116 "assigned_rate_limits": { 00:17:07.116 "rw_ios_per_sec": 0, 00:17:07.116 "rw_mbytes_per_sec": 0, 00:17:07.116 "r_mbytes_per_sec": 0, 00:17:07.116 "w_mbytes_per_sec": 0 00:17:07.116 }, 00:17:07.116 "claimed": true, 00:17:07.116 "claim_type": "exclusive_write", 00:17:07.116 "zoned": false, 00:17:07.116 "supported_io_types": { 00:17:07.116 "read": true, 
00:17:07.116 "write": true, 00:17:07.116 "unmap": true, 00:17:07.116 "flush": true, 00:17:07.116 "reset": true, 00:17:07.116 "nvme_admin": false, 00:17:07.116 "nvme_io": false, 00:17:07.116 "nvme_io_md": false, 00:17:07.116 "write_zeroes": true, 00:17:07.116 "zcopy": true, 00:17:07.116 "get_zone_info": false, 00:17:07.116 "zone_management": false, 00:17:07.116 "zone_append": false, 00:17:07.116 "compare": false, 00:17:07.116 "compare_and_write": false, 00:17:07.116 "abort": true, 00:17:07.116 "seek_hole": false, 00:17:07.116 "seek_data": false, 00:17:07.116 "copy": true, 00:17:07.116 "nvme_iov_md": false 00:17:07.116 }, 00:17:07.116 "memory_domains": [ 00:17:07.116 { 00:17:07.116 "dma_device_id": "system", 00:17:07.116 "dma_device_type": 1 00:17:07.116 }, 00:17:07.116 { 00:17:07.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.116 "dma_device_type": 2 00:17:07.116 } 00:17:07.116 ], 00:17:07.116 "driver_specific": {} 00:17:07.116 } 00:17:07.116 ] 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.116 "name": "Existed_Raid", 00:17:07.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.116 "strip_size_kb": 0, 00:17:07.116 "state": "configuring", 00:17:07.116 "raid_level": "raid1", 00:17:07.116 "superblock": false, 00:17:07.116 "num_base_bdevs": 4, 00:17:07.116 "num_base_bdevs_discovered": 2, 00:17:07.116 "num_base_bdevs_operational": 4, 00:17:07.116 "base_bdevs_list": [ 00:17:07.116 { 00:17:07.116 "name": "BaseBdev1", 00:17:07.116 "uuid": "78bbfac8-333d-45ed-9737-e762a9391e5b", 00:17:07.116 "is_configured": true, 00:17:07.116 "data_offset": 0, 00:17:07.116 "data_size": 65536 00:17:07.116 }, 00:17:07.116 { 00:17:07.116 "name": "BaseBdev2", 00:17:07.116 "uuid": "5aa3e6ca-7f2a-49fa-a9c3-6cb19f569309", 00:17:07.116 "is_configured": true, 
00:17:07.116 "data_offset": 0, 00:17:07.116 "data_size": 65536 00:17:07.116 }, 00:17:07.116 { 00:17:07.116 "name": "BaseBdev3", 00:17:07.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.116 "is_configured": false, 00:17:07.116 "data_offset": 0, 00:17:07.116 "data_size": 0 00:17:07.116 }, 00:17:07.116 { 00:17:07.116 "name": "BaseBdev4", 00:17:07.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.116 "is_configured": false, 00:17:07.116 "data_offset": 0, 00:17:07.116 "data_size": 0 00:17:07.116 } 00:17:07.116 ] 00:17:07.116 }' 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.116 09:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.682 [2024-11-06 09:09:44.108391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.682 BaseBdev3 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.682 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.682 [ 00:17:07.682 { 00:17:07.682 "name": "BaseBdev3", 00:17:07.682 "aliases": [ 00:17:07.682 "cdb84d7c-9524-4cfa-b072-2c90eb7759f5" 00:17:07.682 ], 00:17:07.682 "product_name": "Malloc disk", 00:17:07.682 "block_size": 512, 00:17:07.682 "num_blocks": 65536, 00:17:07.682 "uuid": "cdb84d7c-9524-4cfa-b072-2c90eb7759f5", 00:17:07.682 "assigned_rate_limits": { 00:17:07.682 "rw_ios_per_sec": 0, 00:17:07.682 "rw_mbytes_per_sec": 0, 00:17:07.683 "r_mbytes_per_sec": 0, 00:17:07.683 "w_mbytes_per_sec": 0 00:17:07.683 }, 00:17:07.683 "claimed": true, 00:17:07.683 "claim_type": "exclusive_write", 00:17:07.683 "zoned": false, 00:17:07.683 "supported_io_types": { 00:17:07.683 "read": true, 00:17:07.683 "write": true, 00:17:07.683 "unmap": true, 00:17:07.683 "flush": true, 00:17:07.683 "reset": true, 00:17:07.683 "nvme_admin": false, 00:17:07.683 "nvme_io": false, 00:17:07.683 "nvme_io_md": false, 00:17:07.683 "write_zeroes": true, 00:17:07.683 "zcopy": true, 00:17:07.683 "get_zone_info": false, 00:17:07.683 "zone_management": false, 00:17:07.683 "zone_append": false, 00:17:07.683 "compare": false, 00:17:07.683 "compare_and_write": false, 
00:17:07.683 "abort": true, 00:17:07.683 "seek_hole": false, 00:17:07.683 "seek_data": false, 00:17:07.683 "copy": true, 00:17:07.683 "nvme_iov_md": false 00:17:07.683 }, 00:17:07.683 "memory_domains": [ 00:17:07.683 { 00:17:07.683 "dma_device_id": "system", 00:17:07.683 "dma_device_type": 1 00:17:07.683 }, 00:17:07.683 { 00:17:07.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.683 "dma_device_type": 2 00:17:07.683 } 00:17:07.683 ], 00:17:07.683 "driver_specific": {} 00:17:07.683 } 00:17:07.683 ] 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.683 "name": "Existed_Raid", 00:17:07.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.683 "strip_size_kb": 0, 00:17:07.683 "state": "configuring", 00:17:07.683 "raid_level": "raid1", 00:17:07.683 "superblock": false, 00:17:07.683 "num_base_bdevs": 4, 00:17:07.683 "num_base_bdevs_discovered": 3, 00:17:07.683 "num_base_bdevs_operational": 4, 00:17:07.683 "base_bdevs_list": [ 00:17:07.683 { 00:17:07.683 "name": "BaseBdev1", 00:17:07.683 "uuid": "78bbfac8-333d-45ed-9737-e762a9391e5b", 00:17:07.683 "is_configured": true, 00:17:07.683 "data_offset": 0, 00:17:07.683 "data_size": 65536 00:17:07.683 }, 00:17:07.683 { 00:17:07.683 "name": "BaseBdev2", 00:17:07.683 "uuid": "5aa3e6ca-7f2a-49fa-a9c3-6cb19f569309", 00:17:07.683 "is_configured": true, 00:17:07.683 "data_offset": 0, 00:17:07.683 "data_size": 65536 00:17:07.683 }, 00:17:07.683 { 00:17:07.683 "name": "BaseBdev3", 00:17:07.683 "uuid": "cdb84d7c-9524-4cfa-b072-2c90eb7759f5", 00:17:07.683 "is_configured": true, 00:17:07.683 "data_offset": 0, 00:17:07.683 "data_size": 65536 00:17:07.683 }, 00:17:07.683 { 00:17:07.683 "name": "BaseBdev4", 00:17:07.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.683 "is_configured": false, 
00:17:07.683 "data_offset": 0, 00:17:07.683 "data_size": 0 00:17:07.683 } 00:17:07.683 ] 00:17:07.683 }' 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.683 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.251 [2024-11-06 09:09:44.697005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:08.251 [2024-11-06 09:09:44.697066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.251 [2024-11-06 09:09:44.697079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:08.251 [2024-11-06 09:09:44.697438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.251 [2024-11-06 09:09:44.697662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.251 [2024-11-06 09:09:44.697684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:08.251 [2024-11-06 09:09:44.698047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.251 BaseBdev4 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.251 [ 00:17:08.251 { 00:17:08.251 "name": "BaseBdev4", 00:17:08.251 "aliases": [ 00:17:08.251 "79ca738f-b088-4dc9-a789-10319055bd86" 00:17:08.251 ], 00:17:08.251 "product_name": "Malloc disk", 00:17:08.251 "block_size": 512, 00:17:08.251 "num_blocks": 65536, 00:17:08.251 "uuid": "79ca738f-b088-4dc9-a789-10319055bd86", 00:17:08.251 "assigned_rate_limits": { 00:17:08.251 "rw_ios_per_sec": 0, 00:17:08.251 "rw_mbytes_per_sec": 0, 00:17:08.251 "r_mbytes_per_sec": 0, 00:17:08.251 "w_mbytes_per_sec": 0 00:17:08.251 }, 00:17:08.251 "claimed": true, 00:17:08.251 "claim_type": "exclusive_write", 00:17:08.251 "zoned": false, 00:17:08.251 "supported_io_types": { 00:17:08.251 "read": true, 00:17:08.251 "write": true, 00:17:08.251 "unmap": true, 00:17:08.251 "flush": true, 00:17:08.251 "reset": true, 00:17:08.251 
"nvme_admin": false, 00:17:08.251 "nvme_io": false, 00:17:08.251 "nvme_io_md": false, 00:17:08.251 "write_zeroes": true, 00:17:08.251 "zcopy": true, 00:17:08.251 "get_zone_info": false, 00:17:08.251 "zone_management": false, 00:17:08.251 "zone_append": false, 00:17:08.251 "compare": false, 00:17:08.251 "compare_and_write": false, 00:17:08.251 "abort": true, 00:17:08.251 "seek_hole": false, 00:17:08.251 "seek_data": false, 00:17:08.251 "copy": true, 00:17:08.251 "nvme_iov_md": false 00:17:08.251 }, 00:17:08.251 "memory_domains": [ 00:17:08.251 { 00:17:08.251 "dma_device_id": "system", 00:17:08.251 "dma_device_type": 1 00:17:08.251 }, 00:17:08.251 { 00:17:08.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.251 "dma_device_type": 2 00:17:08.251 } 00:17:08.251 ], 00:17:08.251 "driver_specific": {} 00:17:08.251 } 00:17:08.251 ] 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.251 09:09:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.251 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.510 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.510 "name": "Existed_Raid", 00:17:08.510 "uuid": "c7c46cf9-5639-4ea9-8dc4-a59af88aa59e", 00:17:08.510 "strip_size_kb": 0, 00:17:08.510 "state": "online", 00:17:08.510 "raid_level": "raid1", 00:17:08.510 "superblock": false, 00:17:08.510 "num_base_bdevs": 4, 00:17:08.510 "num_base_bdevs_discovered": 4, 00:17:08.510 "num_base_bdevs_operational": 4, 00:17:08.510 "base_bdevs_list": [ 00:17:08.510 { 00:17:08.510 "name": "BaseBdev1", 00:17:08.510 "uuid": "78bbfac8-333d-45ed-9737-e762a9391e5b", 00:17:08.510 "is_configured": true, 00:17:08.510 "data_offset": 0, 00:17:08.510 "data_size": 65536 00:17:08.510 }, 00:17:08.510 { 00:17:08.510 "name": "BaseBdev2", 00:17:08.510 "uuid": "5aa3e6ca-7f2a-49fa-a9c3-6cb19f569309", 00:17:08.510 "is_configured": true, 00:17:08.510 "data_offset": 0, 00:17:08.510 "data_size": 65536 00:17:08.510 }, 00:17:08.510 { 00:17:08.510 "name": "BaseBdev3", 00:17:08.510 "uuid": 
"cdb84d7c-9524-4cfa-b072-2c90eb7759f5", 00:17:08.510 "is_configured": true, 00:17:08.510 "data_offset": 0, 00:17:08.510 "data_size": 65536 00:17:08.510 }, 00:17:08.510 { 00:17:08.510 "name": "BaseBdev4", 00:17:08.510 "uuid": "79ca738f-b088-4dc9-a789-10319055bd86", 00:17:08.510 "is_configured": true, 00:17:08.510 "data_offset": 0, 00:17:08.510 "data_size": 65536 00:17:08.510 } 00:17:08.510 ] 00:17:08.510 }' 00:17:08.510 09:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.510 09:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.769 [2024-11-06 09:09:45.273651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.769 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.028 09:09:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.028 "name": "Existed_Raid", 00:17:09.028 "aliases": [ 00:17:09.028 "c7c46cf9-5639-4ea9-8dc4-a59af88aa59e" 00:17:09.028 ], 00:17:09.028 "product_name": "Raid Volume", 00:17:09.028 "block_size": 512, 00:17:09.028 "num_blocks": 65536, 00:17:09.028 "uuid": "c7c46cf9-5639-4ea9-8dc4-a59af88aa59e", 00:17:09.028 "assigned_rate_limits": { 00:17:09.028 "rw_ios_per_sec": 0, 00:17:09.028 "rw_mbytes_per_sec": 0, 00:17:09.028 "r_mbytes_per_sec": 0, 00:17:09.028 "w_mbytes_per_sec": 0 00:17:09.028 }, 00:17:09.028 "claimed": false, 00:17:09.028 "zoned": false, 00:17:09.028 "supported_io_types": { 00:17:09.028 "read": true, 00:17:09.028 "write": true, 00:17:09.028 "unmap": false, 00:17:09.028 "flush": false, 00:17:09.028 "reset": true, 00:17:09.028 "nvme_admin": false, 00:17:09.028 "nvme_io": false, 00:17:09.028 "nvme_io_md": false, 00:17:09.028 "write_zeroes": true, 00:17:09.028 "zcopy": false, 00:17:09.028 "get_zone_info": false, 00:17:09.028 "zone_management": false, 00:17:09.028 "zone_append": false, 00:17:09.028 "compare": false, 00:17:09.028 "compare_and_write": false, 00:17:09.028 "abort": false, 00:17:09.028 "seek_hole": false, 00:17:09.028 "seek_data": false, 00:17:09.028 "copy": false, 00:17:09.028 "nvme_iov_md": false 00:17:09.028 }, 00:17:09.028 "memory_domains": [ 00:17:09.028 { 00:17:09.028 "dma_device_id": "system", 00:17:09.028 "dma_device_type": 1 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.028 "dma_device_type": 2 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "dma_device_id": "system", 00:17:09.028 "dma_device_type": 1 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.028 "dma_device_type": 2 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "dma_device_id": "system", 00:17:09.028 "dma_device_type": 1 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:09.028 "dma_device_type": 2 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "dma_device_id": "system", 00:17:09.028 "dma_device_type": 1 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.028 "dma_device_type": 2 00:17:09.028 } 00:17:09.028 ], 00:17:09.028 "driver_specific": { 00:17:09.028 "raid": { 00:17:09.028 "uuid": "c7c46cf9-5639-4ea9-8dc4-a59af88aa59e", 00:17:09.028 "strip_size_kb": 0, 00:17:09.028 "state": "online", 00:17:09.028 "raid_level": "raid1", 00:17:09.028 "superblock": false, 00:17:09.028 "num_base_bdevs": 4, 00:17:09.028 "num_base_bdevs_discovered": 4, 00:17:09.028 "num_base_bdevs_operational": 4, 00:17:09.028 "base_bdevs_list": [ 00:17:09.028 { 00:17:09.028 "name": "BaseBdev1", 00:17:09.028 "uuid": "78bbfac8-333d-45ed-9737-e762a9391e5b", 00:17:09.028 "is_configured": true, 00:17:09.028 "data_offset": 0, 00:17:09.028 "data_size": 65536 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "name": "BaseBdev2", 00:17:09.028 "uuid": "5aa3e6ca-7f2a-49fa-a9c3-6cb19f569309", 00:17:09.028 "is_configured": true, 00:17:09.028 "data_offset": 0, 00:17:09.028 "data_size": 65536 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "name": "BaseBdev3", 00:17:09.028 "uuid": "cdb84d7c-9524-4cfa-b072-2c90eb7759f5", 00:17:09.028 "is_configured": true, 00:17:09.028 "data_offset": 0, 00:17:09.028 "data_size": 65536 00:17:09.028 }, 00:17:09.028 { 00:17:09.028 "name": "BaseBdev4", 00:17:09.028 "uuid": "79ca738f-b088-4dc9-a789-10319055bd86", 00:17:09.028 "is_configured": true, 00:17:09.028 "data_offset": 0, 00:17:09.028 "data_size": 65536 00:17:09.028 } 00:17:09.028 ] 00:17:09.028 } 00:17:09.028 } 00:17:09.028 }' 00:17:09.028 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.028 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:09.028 BaseBdev2 00:17:09.028 BaseBdev3 
00:17:09.028 BaseBdev4' 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.029 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.287 09:09:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.287 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.287 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.288 09:09:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.288 [2024-11-06 09:09:45.681441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.288 
09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.288 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.546 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.546 "name": "Existed_Raid", 00:17:09.546 "uuid": "c7c46cf9-5639-4ea9-8dc4-a59af88aa59e", 00:17:09.546 "strip_size_kb": 0, 00:17:09.546 "state": "online", 00:17:09.546 "raid_level": "raid1", 00:17:09.546 "superblock": false, 00:17:09.546 "num_base_bdevs": 4, 00:17:09.546 "num_base_bdevs_discovered": 3, 00:17:09.546 "num_base_bdevs_operational": 3, 00:17:09.546 "base_bdevs_list": [ 00:17:09.546 { 00:17:09.546 "name": null, 00:17:09.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.546 "is_configured": false, 00:17:09.546 "data_offset": 0, 00:17:09.546 "data_size": 65536 00:17:09.546 }, 00:17:09.546 { 00:17:09.546 "name": "BaseBdev2", 00:17:09.546 "uuid": "5aa3e6ca-7f2a-49fa-a9c3-6cb19f569309", 00:17:09.546 "is_configured": true, 00:17:09.546 "data_offset": 0, 00:17:09.546 "data_size": 65536 00:17:09.546 }, 00:17:09.546 { 00:17:09.546 "name": "BaseBdev3", 00:17:09.546 "uuid": "cdb84d7c-9524-4cfa-b072-2c90eb7759f5", 00:17:09.546 "is_configured": true, 00:17:09.546 "data_offset": 0, 
00:17:09.546 "data_size": 65536 00:17:09.546 }, 00:17:09.546 { 00:17:09.546 "name": "BaseBdev4", 00:17:09.546 "uuid": "79ca738f-b088-4dc9-a789-10319055bd86", 00:17:09.546 "is_configured": true, 00:17:09.546 "data_offset": 0, 00:17:09.546 "data_size": 65536 00:17:09.546 } 00:17:09.546 ] 00:17:09.546 }' 00:17:09.546 09:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.546 09:09:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.804 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:09.804 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:09.804 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.804 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.804 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.804 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:09.804 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 [2024-11-06 09:09:46.346845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.063 [2024-11-06 09:09:46.506951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.063 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 [2024-11-06 09:09:46.649531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:10.322 [2024-11-06 09:09:46.649655] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.322 [2024-11-06 09:09:46.738661] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.322 [2024-11-06 09:09:46.738936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.322 [2024-11-06 09:09:46.739148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 BaseBdev2 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.322 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.581 [ 00:17:10.581 { 00:17:10.581 "name": "BaseBdev2", 00:17:10.581 "aliases": [ 00:17:10.581 "feb40d02-c7e0-4d74-aeab-7a2150a742e5" 00:17:10.581 ], 00:17:10.581 "product_name": "Malloc disk", 00:17:10.581 "block_size": 512, 00:17:10.581 "num_blocks": 65536, 00:17:10.581 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:10.581 "assigned_rate_limits": { 00:17:10.581 "rw_ios_per_sec": 0, 00:17:10.581 "rw_mbytes_per_sec": 0, 00:17:10.581 "r_mbytes_per_sec": 0, 00:17:10.581 "w_mbytes_per_sec": 0 00:17:10.581 }, 00:17:10.581 "claimed": false, 00:17:10.581 "zoned": false, 00:17:10.581 "supported_io_types": { 00:17:10.581 "read": true, 00:17:10.581 "write": true, 00:17:10.581 "unmap": true, 00:17:10.581 "flush": true, 00:17:10.581 "reset": true, 00:17:10.581 "nvme_admin": false, 00:17:10.581 "nvme_io": false, 00:17:10.582 "nvme_io_md": false, 00:17:10.582 "write_zeroes": true, 00:17:10.582 "zcopy": true, 00:17:10.582 "get_zone_info": false, 00:17:10.582 "zone_management": false, 00:17:10.582 "zone_append": false, 
00:17:10.582 "compare": false, 00:17:10.582 "compare_and_write": false, 00:17:10.582 "abort": true, 00:17:10.582 "seek_hole": false, 00:17:10.582 "seek_data": false, 00:17:10.582 "copy": true, 00:17:10.582 "nvme_iov_md": false 00:17:10.582 }, 00:17:10.582 "memory_domains": [ 00:17:10.582 { 00:17:10.582 "dma_device_id": "system", 00:17:10.582 "dma_device_type": 1 00:17:10.582 }, 00:17:10.582 { 00:17:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.582 "dma_device_type": 2 00:17:10.582 } 00:17:10.582 ], 00:17:10.582 "driver_specific": {} 00:17:10.582 } 00:17:10.582 ] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.582 BaseBdev3 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.582 [ 00:17:10.582 { 00:17:10.582 "name": "BaseBdev3", 00:17:10.582 "aliases": [ 00:17:10.582 "bbf88146-fecf-49da-94cd-da9c65af8037" 00:17:10.582 ], 00:17:10.582 "product_name": "Malloc disk", 00:17:10.582 "block_size": 512, 00:17:10.582 "num_blocks": 65536, 00:17:10.582 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:10.582 "assigned_rate_limits": { 00:17:10.582 "rw_ios_per_sec": 0, 00:17:10.582 "rw_mbytes_per_sec": 0, 00:17:10.582 "r_mbytes_per_sec": 0, 00:17:10.582 "w_mbytes_per_sec": 0 00:17:10.582 }, 00:17:10.582 "claimed": false, 00:17:10.582 "zoned": false, 00:17:10.582 "supported_io_types": { 00:17:10.582 "read": true, 00:17:10.582 "write": true, 00:17:10.582 "unmap": true, 00:17:10.582 "flush": true, 00:17:10.582 "reset": true, 00:17:10.582 "nvme_admin": false, 00:17:10.582 "nvme_io": false, 00:17:10.582 "nvme_io_md": false, 00:17:10.582 "write_zeroes": true, 00:17:10.582 "zcopy": true, 00:17:10.582 "get_zone_info": false, 00:17:10.582 "zone_management": false, 00:17:10.582 "zone_append": false, 
00:17:10.582 "compare": false, 00:17:10.582 "compare_and_write": false, 00:17:10.582 "abort": true, 00:17:10.582 "seek_hole": false, 00:17:10.582 "seek_data": false, 00:17:10.582 "copy": true, 00:17:10.582 "nvme_iov_md": false 00:17:10.582 }, 00:17:10.582 "memory_domains": [ 00:17:10.582 { 00:17:10.582 "dma_device_id": "system", 00:17:10.582 "dma_device_type": 1 00:17:10.582 }, 00:17:10.582 { 00:17:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.582 "dma_device_type": 2 00:17:10.582 } 00:17:10.582 ], 00:17:10.582 "driver_specific": {} 00:17:10.582 } 00:17:10.582 ] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.582 BaseBdev4 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.582 09:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.582 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.583 [ 00:17:10.583 { 00:17:10.583 "name": "BaseBdev4", 00:17:10.583 "aliases": [ 00:17:10.583 "129788fd-8ff4-4ba1-9e73-29b184659ec2" 00:17:10.583 ], 00:17:10.583 "product_name": "Malloc disk", 00:17:10.583 "block_size": 512, 00:17:10.583 "num_blocks": 65536, 00:17:10.583 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:10.583 "assigned_rate_limits": { 00:17:10.583 "rw_ios_per_sec": 0, 00:17:10.583 "rw_mbytes_per_sec": 0, 00:17:10.583 "r_mbytes_per_sec": 0, 00:17:10.583 "w_mbytes_per_sec": 0 00:17:10.583 }, 00:17:10.583 "claimed": false, 00:17:10.583 "zoned": false, 00:17:10.583 "supported_io_types": { 00:17:10.583 "read": true, 00:17:10.583 "write": true, 00:17:10.583 "unmap": true, 00:17:10.583 "flush": true, 00:17:10.583 "reset": true, 00:17:10.583 "nvme_admin": false, 00:17:10.583 "nvme_io": false, 00:17:10.583 "nvme_io_md": false, 00:17:10.583 "write_zeroes": true, 00:17:10.583 "zcopy": true, 00:17:10.583 "get_zone_info": false, 00:17:10.583 "zone_management": false, 00:17:10.583 "zone_append": false, 
00:17:10.583 "compare": false, 00:17:10.583 "compare_and_write": false, 00:17:10.583 "abort": true, 00:17:10.583 "seek_hole": false, 00:17:10.583 "seek_data": false, 00:17:10.583 "copy": true, 00:17:10.583 "nvme_iov_md": false 00:17:10.583 }, 00:17:10.583 "memory_domains": [ 00:17:10.583 { 00:17:10.583 "dma_device_id": "system", 00:17:10.583 "dma_device_type": 1 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.583 "dma_device_type": 2 00:17:10.583 } 00:17:10.583 ], 00:17:10.583 "driver_specific": {} 00:17:10.583 } 00:17:10.583 ] 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.583 [2024-11-06 09:09:47.037351] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:10.583 [2024-11-06 09:09:47.037415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:10.583 [2024-11-06 09:09:47.037446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.583 [2024-11-06 09:09:47.039986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.583 [2024-11-06 09:09:47.040053] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:17:10.583 "name": "Existed_Raid", 00:17:10.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.583 "strip_size_kb": 0, 00:17:10.583 "state": "configuring", 00:17:10.583 "raid_level": "raid1", 00:17:10.583 "superblock": false, 00:17:10.583 "num_base_bdevs": 4, 00:17:10.583 "num_base_bdevs_discovered": 3, 00:17:10.583 "num_base_bdevs_operational": 4, 00:17:10.583 "base_bdevs_list": [ 00:17:10.583 { 00:17:10.583 "name": "BaseBdev1", 00:17:10.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.583 "is_configured": false, 00:17:10.583 "data_offset": 0, 00:17:10.583 "data_size": 0 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "name": "BaseBdev2", 00:17:10.583 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:10.583 "is_configured": true, 00:17:10.583 "data_offset": 0, 00:17:10.583 "data_size": 65536 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "name": "BaseBdev3", 00:17:10.583 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:10.583 "is_configured": true, 00:17:10.583 "data_offset": 0, 00:17:10.583 "data_size": 65536 00:17:10.583 }, 00:17:10.583 { 00:17:10.583 "name": "BaseBdev4", 00:17:10.583 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:10.583 "is_configured": true, 00:17:10.583 "data_offset": 0, 00:17:10.583 "data_size": 65536 00:17:10.583 } 00:17:10.583 ] 00:17:10.583 }' 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.583 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.149 [2024-11-06 09:09:47.545538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.149 "name": "Existed_Raid", 00:17:11.149 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:11.149 "strip_size_kb": 0, 00:17:11.149 "state": "configuring", 00:17:11.149 "raid_level": "raid1", 00:17:11.149 "superblock": false, 00:17:11.149 "num_base_bdevs": 4, 00:17:11.149 "num_base_bdevs_discovered": 2, 00:17:11.149 "num_base_bdevs_operational": 4, 00:17:11.149 "base_bdevs_list": [ 00:17:11.149 { 00:17:11.149 "name": "BaseBdev1", 00:17:11.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.149 "is_configured": false, 00:17:11.149 "data_offset": 0, 00:17:11.149 "data_size": 0 00:17:11.149 }, 00:17:11.149 { 00:17:11.149 "name": null, 00:17:11.149 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:11.149 "is_configured": false, 00:17:11.149 "data_offset": 0, 00:17:11.149 "data_size": 65536 00:17:11.149 }, 00:17:11.149 { 00:17:11.149 "name": "BaseBdev3", 00:17:11.149 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:11.149 "is_configured": true, 00:17:11.149 "data_offset": 0, 00:17:11.149 "data_size": 65536 00:17:11.149 }, 00:17:11.149 { 00:17:11.149 "name": "BaseBdev4", 00:17:11.149 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:11.149 "is_configured": true, 00:17:11.149 "data_offset": 0, 00:17:11.149 "data_size": 65536 00:17:11.149 } 00:17:11.149 ] 00:17:11.149 }' 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.149 09:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.715 [2024-11-06 09:09:48.167795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.715 BaseBdev1 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.715 [ 00:17:11.715 { 00:17:11.715 "name": "BaseBdev1", 00:17:11.715 "aliases": [ 00:17:11.715 "8d80e5c8-138e-41ae-968b-81a5578bff8a" 00:17:11.715 ], 00:17:11.715 "product_name": "Malloc disk", 00:17:11.715 "block_size": 512, 00:17:11.715 "num_blocks": 65536, 00:17:11.715 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:11.715 "assigned_rate_limits": { 00:17:11.715 "rw_ios_per_sec": 0, 00:17:11.715 "rw_mbytes_per_sec": 0, 00:17:11.715 "r_mbytes_per_sec": 0, 00:17:11.715 "w_mbytes_per_sec": 0 00:17:11.715 }, 00:17:11.715 "claimed": true, 00:17:11.715 "claim_type": "exclusive_write", 00:17:11.715 "zoned": false, 00:17:11.715 "supported_io_types": { 00:17:11.715 "read": true, 00:17:11.715 "write": true, 00:17:11.715 "unmap": true, 00:17:11.715 "flush": true, 00:17:11.715 "reset": true, 00:17:11.715 "nvme_admin": false, 00:17:11.715 "nvme_io": false, 00:17:11.715 "nvme_io_md": false, 00:17:11.715 "write_zeroes": true, 00:17:11.715 "zcopy": true, 00:17:11.715 "get_zone_info": false, 00:17:11.715 "zone_management": false, 00:17:11.715 "zone_append": false, 00:17:11.715 "compare": false, 00:17:11.715 "compare_and_write": false, 00:17:11.715 "abort": true, 00:17:11.715 "seek_hole": false, 00:17:11.715 "seek_data": false, 00:17:11.715 "copy": true, 00:17:11.715 "nvme_iov_md": false 00:17:11.715 }, 00:17:11.715 "memory_domains": [ 00:17:11.715 { 00:17:11.715 "dma_device_id": "system", 00:17:11.715 "dma_device_type": 1 00:17:11.715 }, 00:17:11.715 { 00:17:11.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.715 "dma_device_type": 2 00:17:11.715 } 00:17:11.715 ], 00:17:11.715 "driver_specific": {} 00:17:11.715 } 00:17:11.715 ] 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.715 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.715 "name": "Existed_Raid", 00:17:11.715 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:11.715 "strip_size_kb": 0, 00:17:11.715 "state": "configuring", 00:17:11.715 "raid_level": "raid1", 00:17:11.715 "superblock": false, 00:17:11.715 "num_base_bdevs": 4, 00:17:11.715 "num_base_bdevs_discovered": 3, 00:17:11.715 "num_base_bdevs_operational": 4, 00:17:11.715 "base_bdevs_list": [ 00:17:11.715 { 00:17:11.715 "name": "BaseBdev1", 00:17:11.715 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:11.715 "is_configured": true, 00:17:11.715 "data_offset": 0, 00:17:11.715 "data_size": 65536 00:17:11.715 }, 00:17:11.715 { 00:17:11.715 "name": null, 00:17:11.715 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:11.716 "is_configured": false, 00:17:11.716 "data_offset": 0, 00:17:11.716 "data_size": 65536 00:17:11.716 }, 00:17:11.716 { 00:17:11.716 "name": "BaseBdev3", 00:17:11.716 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:11.716 "is_configured": true, 00:17:11.716 "data_offset": 0, 00:17:11.716 "data_size": 65536 00:17:11.716 }, 00:17:11.716 { 00:17:11.716 "name": "BaseBdev4", 00:17:11.716 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:11.716 "is_configured": true, 00:17:11.716 "data_offset": 0, 00:17:11.716 "data_size": 65536 00:17:11.716 } 00:17:11.716 ] 00:17:11.716 }' 00:17:11.716 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.716 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.282 [2024-11-06 09:09:48.780087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.282 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:12.283 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.283 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.283 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.283 09:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.541 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.541 "name": "Existed_Raid", 00:17:12.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.541 "strip_size_kb": 0, 00:17:12.541 "state": "configuring", 00:17:12.541 "raid_level": "raid1", 00:17:12.541 "superblock": false, 00:17:12.541 "num_base_bdevs": 4, 00:17:12.541 "num_base_bdevs_discovered": 2, 00:17:12.541 "num_base_bdevs_operational": 4, 00:17:12.541 "base_bdevs_list": [ 00:17:12.541 { 00:17:12.541 "name": "BaseBdev1", 00:17:12.541 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:12.541 "is_configured": true, 00:17:12.541 "data_offset": 0, 00:17:12.541 "data_size": 65536 00:17:12.541 }, 00:17:12.541 { 00:17:12.541 "name": null, 00:17:12.541 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:12.541 "is_configured": false, 00:17:12.541 "data_offset": 0, 00:17:12.541 "data_size": 65536 00:17:12.541 }, 00:17:12.541 { 00:17:12.541 "name": null, 00:17:12.541 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:12.541 "is_configured": false, 00:17:12.541 "data_offset": 0, 00:17:12.541 "data_size": 65536 00:17:12.541 }, 00:17:12.541 { 00:17:12.541 "name": "BaseBdev4", 00:17:12.541 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:12.541 "is_configured": true, 00:17:12.541 "data_offset": 0, 00:17:12.541 "data_size": 65536 00:17:12.541 } 00:17:12.541 ] 00:17:12.541 }' 00:17:12.541 09:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.541 09:09:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.800 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:12.800 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.800 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.800 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.800 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.064 [2024-11-06 09:09:49.360236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.064 09:09:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.064 "name": "Existed_Raid", 00:17:13.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.064 "strip_size_kb": 0, 00:17:13.064 "state": "configuring", 00:17:13.064 "raid_level": "raid1", 00:17:13.064 "superblock": false, 00:17:13.064 "num_base_bdevs": 4, 00:17:13.064 "num_base_bdevs_discovered": 3, 00:17:13.064 "num_base_bdevs_operational": 4, 00:17:13.064 "base_bdevs_list": [ 00:17:13.064 { 00:17:13.064 "name": "BaseBdev1", 00:17:13.064 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:13.064 "is_configured": true, 00:17:13.064 "data_offset": 0, 00:17:13.064 "data_size": 65536 00:17:13.064 }, 00:17:13.064 { 00:17:13.064 "name": null, 00:17:13.064 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:13.064 "is_configured": false, 00:17:13.064 "data_offset": 
0, 00:17:13.064 "data_size": 65536 00:17:13.064 }, 00:17:13.064 { 00:17:13.064 "name": "BaseBdev3", 00:17:13.064 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:13.064 "is_configured": true, 00:17:13.064 "data_offset": 0, 00:17:13.064 "data_size": 65536 00:17:13.064 }, 00:17:13.064 { 00:17:13.064 "name": "BaseBdev4", 00:17:13.064 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:13.064 "is_configured": true, 00:17:13.064 "data_offset": 0, 00:17:13.064 "data_size": 65536 00:17:13.064 } 00:17:13.064 ] 00:17:13.064 }' 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.064 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.634 09:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.634 [2024-11-06 09:09:49.940469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.634 09:09:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.634 "name": "Existed_Raid", 00:17:13.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.634 "strip_size_kb": 0, 00:17:13.634 "state": "configuring", 00:17:13.634 
"raid_level": "raid1", 00:17:13.634 "superblock": false, 00:17:13.634 "num_base_bdevs": 4, 00:17:13.634 "num_base_bdevs_discovered": 2, 00:17:13.634 "num_base_bdevs_operational": 4, 00:17:13.634 "base_bdevs_list": [ 00:17:13.634 { 00:17:13.634 "name": null, 00:17:13.634 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:13.634 "is_configured": false, 00:17:13.634 "data_offset": 0, 00:17:13.634 "data_size": 65536 00:17:13.634 }, 00:17:13.634 { 00:17:13.634 "name": null, 00:17:13.634 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:13.634 "is_configured": false, 00:17:13.634 "data_offset": 0, 00:17:13.634 "data_size": 65536 00:17:13.634 }, 00:17:13.634 { 00:17:13.634 "name": "BaseBdev3", 00:17:13.634 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:13.634 "is_configured": true, 00:17:13.634 "data_offset": 0, 00:17:13.634 "data_size": 65536 00:17:13.634 }, 00:17:13.634 { 00:17:13.634 "name": "BaseBdev4", 00:17:13.634 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:13.634 "is_configured": true, 00:17:13.634 "data_offset": 0, 00:17:13.634 "data_size": 65536 00:17:13.634 } 00:17:13.634 ] 00:17:13.634 }' 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.634 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.200 [2024-11-06 09:09:50.594127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.200 "name": "Existed_Raid", 00:17:14.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.200 "strip_size_kb": 0, 00:17:14.200 "state": "configuring", 00:17:14.200 "raid_level": "raid1", 00:17:14.200 "superblock": false, 00:17:14.200 "num_base_bdevs": 4, 00:17:14.200 "num_base_bdevs_discovered": 3, 00:17:14.200 "num_base_bdevs_operational": 4, 00:17:14.200 "base_bdevs_list": [ 00:17:14.200 { 00:17:14.200 "name": null, 00:17:14.200 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:14.200 "is_configured": false, 00:17:14.200 "data_offset": 0, 00:17:14.200 "data_size": 65536 00:17:14.200 }, 00:17:14.200 { 00:17:14.200 "name": "BaseBdev2", 00:17:14.200 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:14.200 "is_configured": true, 00:17:14.200 "data_offset": 0, 00:17:14.200 "data_size": 65536 00:17:14.200 }, 00:17:14.200 { 00:17:14.200 "name": "BaseBdev3", 00:17:14.200 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:14.200 "is_configured": true, 00:17:14.200 "data_offset": 0, 00:17:14.200 "data_size": 65536 00:17:14.200 }, 00:17:14.200 { 00:17:14.200 "name": "BaseBdev4", 00:17:14.200 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:14.200 "is_configured": true, 00:17:14.200 "data_offset": 0, 00:17:14.200 "data_size": 65536 00:17:14.200 } 00:17:14.200 ] 00:17:14.200 }' 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.200 09:09:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.766 09:09:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d80e5c8-138e-41ae-968b-81a5578bff8a 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.766 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.766 [2024-11-06 09:09:51.236111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:14.766 [2024-11-06 09:09:51.236173] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:14.766 [2024-11-06 09:09:51.236189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:14.767 
[2024-11-06 09:09:51.236530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:14.767 [2024-11-06 09:09:51.236740] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:14.767 [2024-11-06 09:09:51.236757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:14.767 [2024-11-06 09:09:51.237079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.767 NewBaseBdev 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.767 [ 00:17:14.767 { 00:17:14.767 "name": "NewBaseBdev", 00:17:14.767 "aliases": [ 00:17:14.767 "8d80e5c8-138e-41ae-968b-81a5578bff8a" 00:17:14.767 ], 00:17:14.767 "product_name": "Malloc disk", 00:17:14.767 "block_size": 512, 00:17:14.767 "num_blocks": 65536, 00:17:14.767 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:14.767 "assigned_rate_limits": { 00:17:14.767 "rw_ios_per_sec": 0, 00:17:14.767 "rw_mbytes_per_sec": 0, 00:17:14.767 "r_mbytes_per_sec": 0, 00:17:14.767 "w_mbytes_per_sec": 0 00:17:14.767 }, 00:17:14.767 "claimed": true, 00:17:14.767 "claim_type": "exclusive_write", 00:17:14.767 "zoned": false, 00:17:14.767 "supported_io_types": { 00:17:14.767 "read": true, 00:17:14.767 "write": true, 00:17:14.767 "unmap": true, 00:17:14.767 "flush": true, 00:17:14.767 "reset": true, 00:17:14.767 "nvme_admin": false, 00:17:14.767 "nvme_io": false, 00:17:14.767 "nvme_io_md": false, 00:17:14.767 "write_zeroes": true, 00:17:14.767 "zcopy": true, 00:17:14.767 "get_zone_info": false, 00:17:14.767 "zone_management": false, 00:17:14.767 "zone_append": false, 00:17:14.767 "compare": false, 00:17:14.767 "compare_and_write": false, 00:17:14.767 "abort": true, 00:17:14.767 "seek_hole": false, 00:17:14.767 "seek_data": false, 00:17:14.767 "copy": true, 00:17:14.767 "nvme_iov_md": false 00:17:14.767 }, 00:17:14.767 "memory_domains": [ 00:17:14.767 { 00:17:14.767 "dma_device_id": "system", 00:17:14.767 "dma_device_type": 1 00:17:14.767 }, 00:17:14.767 { 00:17:14.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.767 "dma_device_type": 2 00:17:14.767 } 00:17:14.767 ], 00:17:14.767 "driver_specific": {} 00:17:14.767 } 00:17:14.767 ] 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.767 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.025 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.025 "name": "Existed_Raid", 00:17:15.025 "uuid": "4da83bb6-43c6-4c99-8299-74f496d8a82e", 00:17:15.025 "strip_size_kb": 0, 00:17:15.025 "state": "online", 00:17:15.025 
"raid_level": "raid1", 00:17:15.025 "superblock": false, 00:17:15.025 "num_base_bdevs": 4, 00:17:15.025 "num_base_bdevs_discovered": 4, 00:17:15.025 "num_base_bdevs_operational": 4, 00:17:15.025 "base_bdevs_list": [ 00:17:15.025 { 00:17:15.025 "name": "NewBaseBdev", 00:17:15.025 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:15.025 "is_configured": true, 00:17:15.025 "data_offset": 0, 00:17:15.025 "data_size": 65536 00:17:15.025 }, 00:17:15.025 { 00:17:15.025 "name": "BaseBdev2", 00:17:15.026 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:15.026 "is_configured": true, 00:17:15.026 "data_offset": 0, 00:17:15.026 "data_size": 65536 00:17:15.026 }, 00:17:15.026 { 00:17:15.026 "name": "BaseBdev3", 00:17:15.026 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:15.026 "is_configured": true, 00:17:15.026 "data_offset": 0, 00:17:15.026 "data_size": 65536 00:17:15.026 }, 00:17:15.026 { 00:17:15.026 "name": "BaseBdev4", 00:17:15.026 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:15.026 "is_configured": true, 00:17:15.026 "data_offset": 0, 00:17:15.026 "data_size": 65536 00:17:15.026 } 00:17:15.026 ] 00:17:15.026 }' 00:17:15.026 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.026 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.283 [2024-11-06 09:09:51.780739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.283 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.541 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.541 "name": "Existed_Raid", 00:17:15.541 "aliases": [ 00:17:15.541 "4da83bb6-43c6-4c99-8299-74f496d8a82e" 00:17:15.541 ], 00:17:15.541 "product_name": "Raid Volume", 00:17:15.541 "block_size": 512, 00:17:15.541 "num_blocks": 65536, 00:17:15.541 "uuid": "4da83bb6-43c6-4c99-8299-74f496d8a82e", 00:17:15.541 "assigned_rate_limits": { 00:17:15.541 "rw_ios_per_sec": 0, 00:17:15.541 "rw_mbytes_per_sec": 0, 00:17:15.541 "r_mbytes_per_sec": 0, 00:17:15.541 "w_mbytes_per_sec": 0 00:17:15.541 }, 00:17:15.541 "claimed": false, 00:17:15.541 "zoned": false, 00:17:15.541 "supported_io_types": { 00:17:15.541 "read": true, 00:17:15.541 "write": true, 00:17:15.541 "unmap": false, 00:17:15.541 "flush": false, 00:17:15.541 "reset": true, 00:17:15.541 "nvme_admin": false, 00:17:15.541 "nvme_io": false, 00:17:15.541 "nvme_io_md": false, 00:17:15.541 "write_zeroes": true, 00:17:15.541 "zcopy": false, 00:17:15.541 "get_zone_info": false, 00:17:15.541 "zone_management": false, 00:17:15.541 "zone_append": false, 00:17:15.541 "compare": false, 00:17:15.541 "compare_and_write": false, 00:17:15.541 "abort": false, 00:17:15.541 "seek_hole": false, 00:17:15.541 "seek_data": false, 00:17:15.541 
"copy": false, 00:17:15.541 "nvme_iov_md": false 00:17:15.541 }, 00:17:15.541 "memory_domains": [ 00:17:15.541 { 00:17:15.541 "dma_device_id": "system", 00:17:15.541 "dma_device_type": 1 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.541 "dma_device_type": 2 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "dma_device_id": "system", 00:17:15.541 "dma_device_type": 1 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.541 "dma_device_type": 2 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "dma_device_id": "system", 00:17:15.541 "dma_device_type": 1 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.541 "dma_device_type": 2 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "dma_device_id": "system", 00:17:15.541 "dma_device_type": 1 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.541 "dma_device_type": 2 00:17:15.541 } 00:17:15.541 ], 00:17:15.541 "driver_specific": { 00:17:15.541 "raid": { 00:17:15.541 "uuid": "4da83bb6-43c6-4c99-8299-74f496d8a82e", 00:17:15.541 "strip_size_kb": 0, 00:17:15.541 "state": "online", 00:17:15.541 "raid_level": "raid1", 00:17:15.541 "superblock": false, 00:17:15.541 "num_base_bdevs": 4, 00:17:15.541 "num_base_bdevs_discovered": 4, 00:17:15.541 "num_base_bdevs_operational": 4, 00:17:15.541 "base_bdevs_list": [ 00:17:15.541 { 00:17:15.541 "name": "NewBaseBdev", 00:17:15.541 "uuid": "8d80e5c8-138e-41ae-968b-81a5578bff8a", 00:17:15.541 "is_configured": true, 00:17:15.541 "data_offset": 0, 00:17:15.541 "data_size": 65536 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "name": "BaseBdev2", 00:17:15.541 "uuid": "feb40d02-c7e0-4d74-aeab-7a2150a742e5", 00:17:15.541 "is_configured": true, 00:17:15.541 "data_offset": 0, 00:17:15.541 "data_size": 65536 00:17:15.541 }, 00:17:15.541 { 00:17:15.541 "name": "BaseBdev3", 00:17:15.542 "uuid": "bbf88146-fecf-49da-94cd-da9c65af8037", 00:17:15.542 
"is_configured": true, 00:17:15.542 "data_offset": 0, 00:17:15.542 "data_size": 65536 00:17:15.542 }, 00:17:15.542 { 00:17:15.542 "name": "BaseBdev4", 00:17:15.542 "uuid": "129788fd-8ff4-4ba1-9e73-29b184659ec2", 00:17:15.542 "is_configured": true, 00:17:15.542 "data_offset": 0, 00:17:15.542 "data_size": 65536 00:17:15.542 } 00:17:15.542 ] 00:17:15.542 } 00:17:15.542 } 00:17:15.542 }' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:15.542 BaseBdev2 00:17:15.542 BaseBdev3 00:17:15.542 BaseBdev4' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.542 09:09:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.542 09:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.542 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.800 09:09:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.800 [2024-11-06 09:09:52.152388] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.800 [2024-11-06 09:09:52.152420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.800 [2024-11-06 09:09:52.152510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.800 [2024-11-06 09:09:52.152897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.800 [2024-11-06 09:09:52.152937] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73253 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73253 ']' 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73253 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:17:15.800 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:15.801 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73253 00:17:15.801 killing process with pid 73253 00:17:15.801 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:15.801 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:15.801 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73253' 00:17:15.801 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73253 00:17:15.801 [2024-11-06 09:09:52.190564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.801 09:09:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73253 00:17:16.058 [2024-11-06 09:09:52.549940] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.431 ************************************ 00:17:17.431 END TEST raid_state_function_test 00:17:17.431 ************************************ 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.431 00:17:17.431 real 0m12.971s 00:17:17.431 user 0m21.470s 00:17:17.431 sys 0m1.851s 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:17.431 09:09:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:17:17.431 09:09:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:17.431 09:09:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:17.431 09:09:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:17.431 ************************************ 00:17:17.431 START TEST raid_state_function_test_sb 00:17:17.431 ************************************ 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.431 
09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:17.431 Process raid pid: 73932 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73932 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73932' 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73932 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73932 ']' 00:17:17.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:17.431 09:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.431 [2024-11-06 09:09:53.797333] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:17:17.431 [2024-11-06 09:09:53.797502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.689 [2024-11-06 09:09:53.987928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.689 [2024-11-06 09:09:54.126337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.949 [2024-11-06 09:09:54.335505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.949 [2024-11-06 09:09:54.335755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.515 [2024-11-06 09:09:54.835474] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.515 [2024-11-06 09:09:54.835711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.515 [2024-11-06 09:09:54.835740] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.515 [2024-11-06 09:09:54.835777] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.515 [2024-11-06 09:09:54.835787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:18.515 [2024-11-06 09:09:54.835801] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.515 [2024-11-06 09:09:54.835810] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:18.515 [2024-11-06 09:09:54.835842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.515 09:09:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.515 "name": "Existed_Raid", 00:17:18.515 "uuid": "2f713e6e-465a-4836-9248-181e64fb4641", 00:17:18.515 "strip_size_kb": 0, 00:17:18.515 "state": "configuring", 00:17:18.515 "raid_level": "raid1", 00:17:18.515 "superblock": true, 00:17:18.515 "num_base_bdevs": 4, 00:17:18.515 "num_base_bdevs_discovered": 0, 00:17:18.515 "num_base_bdevs_operational": 4, 00:17:18.515 "base_bdevs_list": [ 00:17:18.515 { 00:17:18.515 "name": "BaseBdev1", 00:17:18.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.515 "is_configured": false, 00:17:18.515 "data_offset": 0, 00:17:18.515 "data_size": 0 00:17:18.515 }, 00:17:18.515 { 00:17:18.515 "name": "BaseBdev2", 00:17:18.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.515 "is_configured": false, 00:17:18.515 "data_offset": 0, 00:17:18.515 "data_size": 0 00:17:18.515 }, 00:17:18.515 { 00:17:18.515 "name": "BaseBdev3", 00:17:18.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.515 "is_configured": false, 00:17:18.515 "data_offset": 0, 00:17:18.515 "data_size": 0 00:17:18.515 }, 00:17:18.515 { 00:17:18.515 "name": "BaseBdev4", 00:17:18.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.515 "is_configured": false, 00:17:18.515 "data_offset": 0, 00:17:18.515 "data_size": 0 00:17:18.515 } 00:17:18.515 ] 00:17:18.515 }' 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.515 09:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 [2024-11-06 09:09:55.363630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.082 [2024-11-06 09:09:55.363874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 [2024-11-06 09:09:55.371617] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.082 [2024-11-06 09:09:55.371835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.082 [2024-11-06 09:09:55.371863] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.082 [2024-11-06 09:09:55.371882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.082 [2024-11-06 09:09:55.371893] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.082 [2024-11-06 09:09:55.371914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.082 [2024-11-06 09:09:55.371923] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:19.082 [2024-11-06 09:09:55.371938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 [2024-11-06 09:09:55.419357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.082 BaseBdev1 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.082 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.082 [ 00:17:19.082 { 00:17:19.082 "name": "BaseBdev1", 00:17:19.082 "aliases": [ 00:17:19.082 "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1" 00:17:19.082 ], 00:17:19.082 "product_name": "Malloc disk", 00:17:19.082 "block_size": 512, 00:17:19.082 "num_blocks": 65536, 00:17:19.082 "uuid": "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1", 00:17:19.082 "assigned_rate_limits": { 00:17:19.082 "rw_ios_per_sec": 0, 00:17:19.082 "rw_mbytes_per_sec": 0, 00:17:19.082 "r_mbytes_per_sec": 0, 00:17:19.082 "w_mbytes_per_sec": 0 00:17:19.082 }, 00:17:19.082 "claimed": true, 00:17:19.082 "claim_type": "exclusive_write", 00:17:19.082 "zoned": false, 00:17:19.082 "supported_io_types": { 00:17:19.082 "read": true, 00:17:19.082 "write": true, 00:17:19.082 "unmap": true, 00:17:19.082 "flush": true, 00:17:19.082 "reset": true, 00:17:19.082 "nvme_admin": false, 00:17:19.082 "nvme_io": false, 00:17:19.082 "nvme_io_md": false, 00:17:19.082 "write_zeroes": true, 00:17:19.082 "zcopy": true, 00:17:19.082 "get_zone_info": false, 00:17:19.082 "zone_management": false, 00:17:19.082 "zone_append": false, 00:17:19.082 "compare": false, 00:17:19.082 "compare_and_write": false, 00:17:19.082 "abort": true, 00:17:19.082 "seek_hole": false, 00:17:19.082 "seek_data": false, 00:17:19.082 "copy": true, 00:17:19.082 "nvme_iov_md": false 00:17:19.082 }, 00:17:19.082 "memory_domains": [ 00:17:19.082 { 00:17:19.082 "dma_device_id": "system", 00:17:19.082 "dma_device_type": 1 00:17:19.082 }, 00:17:19.082 { 00:17:19.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.082 "dma_device_type": 2 00:17:19.082 } 00:17:19.082 ], 00:17:19.082 "driver_specific": {} 
00:17:19.082 } 00:17:19.082 ] 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.083 "name": "Existed_Raid", 00:17:19.083 "uuid": "cda53323-a1e9-431b-b933-cb3f8b84ef17", 00:17:19.083 "strip_size_kb": 0, 00:17:19.083 "state": "configuring", 00:17:19.083 "raid_level": "raid1", 00:17:19.083 "superblock": true, 00:17:19.083 "num_base_bdevs": 4, 00:17:19.083 "num_base_bdevs_discovered": 1, 00:17:19.083 "num_base_bdevs_operational": 4, 00:17:19.083 "base_bdevs_list": [ 00:17:19.083 { 00:17:19.083 "name": "BaseBdev1", 00:17:19.083 "uuid": "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1", 00:17:19.083 "is_configured": true, 00:17:19.083 "data_offset": 2048, 00:17:19.083 "data_size": 63488 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "name": "BaseBdev2", 00:17:19.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.083 "is_configured": false, 00:17:19.083 "data_offset": 0, 00:17:19.083 "data_size": 0 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "name": "BaseBdev3", 00:17:19.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.083 "is_configured": false, 00:17:19.083 "data_offset": 0, 00:17:19.083 "data_size": 0 00:17:19.083 }, 00:17:19.083 { 00:17:19.083 "name": "BaseBdev4", 00:17:19.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.083 "is_configured": false, 00:17:19.083 "data_offset": 0, 00:17:19.083 "data_size": 0 00:17:19.083 } 00:17:19.083 ] 00:17:19.083 }' 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.083 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.650 [2024-11-06 09:09:55.975584] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.650 [2024-11-06 09:09:55.975841] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.650 [2024-11-06 09:09:55.987685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.650 [2024-11-06 09:09:55.990564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.650 [2024-11-06 09:09:55.990742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.650 [2024-11-06 09:09:55.990905] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.650 [2024-11-06 09:09:55.991071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.650 [2024-11-06 09:09:55.991222] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:19.650 [2024-11-06 09:09:55.991366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:19.650 09:09:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.650 09:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.650 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.650 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.650 "name": 
"Existed_Raid", 00:17:19.650 "uuid": "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211", 00:17:19.650 "strip_size_kb": 0, 00:17:19.650 "state": "configuring", 00:17:19.650 "raid_level": "raid1", 00:17:19.650 "superblock": true, 00:17:19.650 "num_base_bdevs": 4, 00:17:19.650 "num_base_bdevs_discovered": 1, 00:17:19.650 "num_base_bdevs_operational": 4, 00:17:19.650 "base_bdevs_list": [ 00:17:19.650 { 00:17:19.650 "name": "BaseBdev1", 00:17:19.650 "uuid": "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1", 00:17:19.650 "is_configured": true, 00:17:19.650 "data_offset": 2048, 00:17:19.650 "data_size": 63488 00:17:19.650 }, 00:17:19.650 { 00:17:19.650 "name": "BaseBdev2", 00:17:19.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.650 "is_configured": false, 00:17:19.650 "data_offset": 0, 00:17:19.650 "data_size": 0 00:17:19.650 }, 00:17:19.650 { 00:17:19.650 "name": "BaseBdev3", 00:17:19.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.650 "is_configured": false, 00:17:19.650 "data_offset": 0, 00:17:19.650 "data_size": 0 00:17:19.650 }, 00:17:19.650 { 00:17:19.650 "name": "BaseBdev4", 00:17:19.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.650 "is_configured": false, 00:17:19.650 "data_offset": 0, 00:17:19.650 "data_size": 0 00:17:19.650 } 00:17:19.650 ] 00:17:19.650 }' 00:17:19.650 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.650 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.241 [2024-11-06 09:09:56.565406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.241 
BaseBdev2 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.241 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.241 [ 00:17:20.241 { 00:17:20.241 "name": "BaseBdev2", 00:17:20.241 "aliases": [ 00:17:20.241 "81d0b30f-c557-4cea-babb-70bedf7d8436" 00:17:20.241 ], 00:17:20.241 "product_name": "Malloc disk", 00:17:20.241 "block_size": 512, 00:17:20.241 "num_blocks": 65536, 00:17:20.241 "uuid": "81d0b30f-c557-4cea-babb-70bedf7d8436", 00:17:20.241 "assigned_rate_limits": { 
00:17:20.241 "rw_ios_per_sec": 0, 00:17:20.241 "rw_mbytes_per_sec": 0, 00:17:20.241 "r_mbytes_per_sec": 0, 00:17:20.242 "w_mbytes_per_sec": 0 00:17:20.242 }, 00:17:20.242 "claimed": true, 00:17:20.242 "claim_type": "exclusive_write", 00:17:20.242 "zoned": false, 00:17:20.242 "supported_io_types": { 00:17:20.242 "read": true, 00:17:20.242 "write": true, 00:17:20.242 "unmap": true, 00:17:20.242 "flush": true, 00:17:20.242 "reset": true, 00:17:20.242 "nvme_admin": false, 00:17:20.242 "nvme_io": false, 00:17:20.242 "nvme_io_md": false, 00:17:20.242 "write_zeroes": true, 00:17:20.242 "zcopy": true, 00:17:20.242 "get_zone_info": false, 00:17:20.242 "zone_management": false, 00:17:20.242 "zone_append": false, 00:17:20.242 "compare": false, 00:17:20.242 "compare_and_write": false, 00:17:20.242 "abort": true, 00:17:20.242 "seek_hole": false, 00:17:20.242 "seek_data": false, 00:17:20.242 "copy": true, 00:17:20.242 "nvme_iov_md": false 00:17:20.242 }, 00:17:20.242 "memory_domains": [ 00:17:20.242 { 00:17:20.242 "dma_device_id": "system", 00:17:20.242 "dma_device_type": 1 00:17:20.242 }, 00:17:20.242 { 00:17:20.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.242 "dma_device_type": 2 00:17:20.242 } 00:17:20.242 ], 00:17:20.242 "driver_specific": {} 00:17:20.242 } 00:17:20.242 ] 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.242 "name": "Existed_Raid", 00:17:20.242 "uuid": "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211", 00:17:20.242 "strip_size_kb": 0, 00:17:20.242 "state": "configuring", 00:17:20.242 "raid_level": "raid1", 00:17:20.242 "superblock": true, 00:17:20.242 "num_base_bdevs": 4, 00:17:20.242 "num_base_bdevs_discovered": 2, 00:17:20.242 "num_base_bdevs_operational": 4, 00:17:20.242 
"base_bdevs_list": [ 00:17:20.242 { 00:17:20.242 "name": "BaseBdev1", 00:17:20.242 "uuid": "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1", 00:17:20.242 "is_configured": true, 00:17:20.242 "data_offset": 2048, 00:17:20.242 "data_size": 63488 00:17:20.242 }, 00:17:20.242 { 00:17:20.242 "name": "BaseBdev2", 00:17:20.242 "uuid": "81d0b30f-c557-4cea-babb-70bedf7d8436", 00:17:20.242 "is_configured": true, 00:17:20.242 "data_offset": 2048, 00:17:20.242 "data_size": 63488 00:17:20.242 }, 00:17:20.242 { 00:17:20.242 "name": "BaseBdev3", 00:17:20.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.242 "is_configured": false, 00:17:20.242 "data_offset": 0, 00:17:20.242 "data_size": 0 00:17:20.242 }, 00:17:20.242 { 00:17:20.242 "name": "BaseBdev4", 00:17:20.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.242 "is_configured": false, 00:17:20.242 "data_offset": 0, 00:17:20.242 "data_size": 0 00:17:20.242 } 00:17:20.242 ] 00:17:20.242 }' 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.242 09:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.815 [2024-11-06 09:09:57.175066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.815 BaseBdev3 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev3 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.815 [ 00:17:20.815 { 00:17:20.815 "name": "BaseBdev3", 00:17:20.815 "aliases": [ 00:17:20.815 "d8b24818-3920-465d-9b8a-2b4f8876df00" 00:17:20.815 ], 00:17:20.815 "product_name": "Malloc disk", 00:17:20.815 "block_size": 512, 00:17:20.815 "num_blocks": 65536, 00:17:20.815 "uuid": "d8b24818-3920-465d-9b8a-2b4f8876df00", 00:17:20.815 "assigned_rate_limits": { 00:17:20.815 "rw_ios_per_sec": 0, 00:17:20.815 "rw_mbytes_per_sec": 0, 00:17:20.815 "r_mbytes_per_sec": 0, 00:17:20.815 "w_mbytes_per_sec": 0 00:17:20.815 }, 00:17:20.815 "claimed": true, 00:17:20.815 "claim_type": "exclusive_write", 00:17:20.815 "zoned": false, 00:17:20.815 "supported_io_types": { 00:17:20.815 "read": true, 00:17:20.815 
"write": true, 00:17:20.815 "unmap": true, 00:17:20.815 "flush": true, 00:17:20.815 "reset": true, 00:17:20.815 "nvme_admin": false, 00:17:20.815 "nvme_io": false, 00:17:20.815 "nvme_io_md": false, 00:17:20.815 "write_zeroes": true, 00:17:20.815 "zcopy": true, 00:17:20.815 "get_zone_info": false, 00:17:20.815 "zone_management": false, 00:17:20.815 "zone_append": false, 00:17:20.815 "compare": false, 00:17:20.815 "compare_and_write": false, 00:17:20.815 "abort": true, 00:17:20.815 "seek_hole": false, 00:17:20.815 "seek_data": false, 00:17:20.815 "copy": true, 00:17:20.815 "nvme_iov_md": false 00:17:20.815 }, 00:17:20.815 "memory_domains": [ 00:17:20.815 { 00:17:20.815 "dma_device_id": "system", 00:17:20.815 "dma_device_type": 1 00:17:20.815 }, 00:17:20.815 { 00:17:20.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.815 "dma_device_type": 2 00:17:20.815 } 00:17:20.815 ], 00:17:20.815 "driver_specific": {} 00:17:20.815 } 00:17:20.815 ] 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.815 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.816 "name": "Existed_Raid", 00:17:20.816 "uuid": "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211", 00:17:20.816 "strip_size_kb": 0, 00:17:20.816 "state": "configuring", 00:17:20.816 "raid_level": "raid1", 00:17:20.816 "superblock": true, 00:17:20.816 "num_base_bdevs": 4, 00:17:20.816 "num_base_bdevs_discovered": 3, 00:17:20.816 "num_base_bdevs_operational": 4, 00:17:20.816 "base_bdevs_list": [ 00:17:20.816 { 00:17:20.816 "name": "BaseBdev1", 00:17:20.816 "uuid": "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1", 00:17:20.816 "is_configured": true, 00:17:20.816 "data_offset": 2048, 00:17:20.816 "data_size": 63488 00:17:20.816 }, 00:17:20.816 { 00:17:20.816 "name": "BaseBdev2", 00:17:20.816 "uuid": 
"81d0b30f-c557-4cea-babb-70bedf7d8436", 00:17:20.816 "is_configured": true, 00:17:20.816 "data_offset": 2048, 00:17:20.816 "data_size": 63488 00:17:20.816 }, 00:17:20.816 { 00:17:20.816 "name": "BaseBdev3", 00:17:20.816 "uuid": "d8b24818-3920-465d-9b8a-2b4f8876df00", 00:17:20.816 "is_configured": true, 00:17:20.816 "data_offset": 2048, 00:17:20.816 "data_size": 63488 00:17:20.816 }, 00:17:20.816 { 00:17:20.816 "name": "BaseBdev4", 00:17:20.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.816 "is_configured": false, 00:17:20.816 "data_offset": 0, 00:17:20.816 "data_size": 0 00:17:20.816 } 00:17:20.816 ] 00:17:20.816 }' 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.816 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.385 [2024-11-06 09:09:57.781387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:21.385 [2024-11-06 09:09:57.781733] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:21.385 [2024-11-06 09:09:57.781755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:21.385 BaseBdev4 00:17:21.385 [2024-11-06 09:09:57.782139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:21.385 [2024-11-06 09:09:57.782347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:21.385 [2024-11-06 09:09:57.782379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:17:21.385 [2024-11-06 09:09:57.782580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.385 [ 00:17:21.385 { 00:17:21.385 "name": "BaseBdev4", 00:17:21.385 "aliases": [ 00:17:21.385 "8769c839-ccdd-458d-864a-cbb041f13dd5" 00:17:21.385 ], 00:17:21.385 "product_name": "Malloc disk", 00:17:21.385 "block_size": 512, 00:17:21.385 
"num_blocks": 65536, 00:17:21.385 "uuid": "8769c839-ccdd-458d-864a-cbb041f13dd5", 00:17:21.385 "assigned_rate_limits": { 00:17:21.385 "rw_ios_per_sec": 0, 00:17:21.385 "rw_mbytes_per_sec": 0, 00:17:21.385 "r_mbytes_per_sec": 0, 00:17:21.385 "w_mbytes_per_sec": 0 00:17:21.385 }, 00:17:21.385 "claimed": true, 00:17:21.385 "claim_type": "exclusive_write", 00:17:21.385 "zoned": false, 00:17:21.385 "supported_io_types": { 00:17:21.385 "read": true, 00:17:21.385 "write": true, 00:17:21.385 "unmap": true, 00:17:21.385 "flush": true, 00:17:21.385 "reset": true, 00:17:21.385 "nvme_admin": false, 00:17:21.385 "nvme_io": false, 00:17:21.385 "nvme_io_md": false, 00:17:21.385 "write_zeroes": true, 00:17:21.385 "zcopy": true, 00:17:21.385 "get_zone_info": false, 00:17:21.385 "zone_management": false, 00:17:21.385 "zone_append": false, 00:17:21.385 "compare": false, 00:17:21.385 "compare_and_write": false, 00:17:21.385 "abort": true, 00:17:21.385 "seek_hole": false, 00:17:21.385 "seek_data": false, 00:17:21.385 "copy": true, 00:17:21.385 "nvme_iov_md": false 00:17:21.385 }, 00:17:21.385 "memory_domains": [ 00:17:21.385 { 00:17:21.385 "dma_device_id": "system", 00:17:21.385 "dma_device_type": 1 00:17:21.385 }, 00:17:21.385 { 00:17:21.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.385 "dma_device_type": 2 00:17:21.385 } 00:17:21.385 ], 00:17:21.385 "driver_specific": {} 00:17:21.385 } 00:17:21.385 ] 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.385 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.385 "name": "Existed_Raid", 00:17:21.385 "uuid": "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211", 00:17:21.385 "strip_size_kb": 0, 00:17:21.385 "state": "online", 00:17:21.385 "raid_level": "raid1", 00:17:21.385 "superblock": true, 00:17:21.385 "num_base_bdevs": 4, 
00:17:21.385 "num_base_bdevs_discovered": 4, 00:17:21.385 "num_base_bdevs_operational": 4, 00:17:21.385 "base_bdevs_list": [ 00:17:21.385 { 00:17:21.385 "name": "BaseBdev1", 00:17:21.385 "uuid": "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1", 00:17:21.385 "is_configured": true, 00:17:21.385 "data_offset": 2048, 00:17:21.385 "data_size": 63488 00:17:21.385 }, 00:17:21.385 { 00:17:21.385 "name": "BaseBdev2", 00:17:21.386 "uuid": "81d0b30f-c557-4cea-babb-70bedf7d8436", 00:17:21.386 "is_configured": true, 00:17:21.386 "data_offset": 2048, 00:17:21.386 "data_size": 63488 00:17:21.386 }, 00:17:21.386 { 00:17:21.386 "name": "BaseBdev3", 00:17:21.386 "uuid": "d8b24818-3920-465d-9b8a-2b4f8876df00", 00:17:21.386 "is_configured": true, 00:17:21.386 "data_offset": 2048, 00:17:21.386 "data_size": 63488 00:17:21.386 }, 00:17:21.386 { 00:17:21.386 "name": "BaseBdev4", 00:17:21.386 "uuid": "8769c839-ccdd-458d-864a-cbb041f13dd5", 00:17:21.386 "is_configured": true, 00:17:21.386 "data_offset": 2048, 00:17:21.386 "data_size": 63488 00:17:21.386 } 00:17:21.386 ] 00:17:21.386 }' 00:17:21.386 09:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.386 09:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:21.953 
09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.953 [2024-11-06 09:09:58.394206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.953 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.953 "name": "Existed_Raid", 00:17:21.953 "aliases": [ 00:17:21.953 "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211" 00:17:21.953 ], 00:17:21.953 "product_name": "Raid Volume", 00:17:21.953 "block_size": 512, 00:17:21.953 "num_blocks": 63488, 00:17:21.953 "uuid": "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211", 00:17:21.953 "assigned_rate_limits": { 00:17:21.953 "rw_ios_per_sec": 0, 00:17:21.953 "rw_mbytes_per_sec": 0, 00:17:21.953 "r_mbytes_per_sec": 0, 00:17:21.953 "w_mbytes_per_sec": 0 00:17:21.953 }, 00:17:21.953 "claimed": false, 00:17:21.953 "zoned": false, 00:17:21.953 "supported_io_types": { 00:17:21.953 "read": true, 00:17:21.953 "write": true, 00:17:21.954 "unmap": false, 00:17:21.954 "flush": false, 00:17:21.954 "reset": true, 00:17:21.954 "nvme_admin": false, 00:17:21.954 "nvme_io": false, 00:17:21.954 "nvme_io_md": false, 00:17:21.954 "write_zeroes": true, 00:17:21.954 "zcopy": false, 00:17:21.954 "get_zone_info": false, 00:17:21.954 "zone_management": false, 00:17:21.954 "zone_append": false, 00:17:21.954 "compare": false, 00:17:21.954 "compare_and_write": false, 00:17:21.954 "abort": false, 00:17:21.954 "seek_hole": false, 00:17:21.954 "seek_data": false, 00:17:21.954 "copy": false, 00:17:21.954 
"nvme_iov_md": false 00:17:21.954 }, 00:17:21.954 "memory_domains": [ 00:17:21.954 { 00:17:21.954 "dma_device_id": "system", 00:17:21.954 "dma_device_type": 1 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.954 "dma_device_type": 2 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "dma_device_id": "system", 00:17:21.954 "dma_device_type": 1 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.954 "dma_device_type": 2 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "dma_device_id": "system", 00:17:21.954 "dma_device_type": 1 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.954 "dma_device_type": 2 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "dma_device_id": "system", 00:17:21.954 "dma_device_type": 1 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.954 "dma_device_type": 2 00:17:21.954 } 00:17:21.954 ], 00:17:21.954 "driver_specific": { 00:17:21.954 "raid": { 00:17:21.954 "uuid": "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211", 00:17:21.954 "strip_size_kb": 0, 00:17:21.954 "state": "online", 00:17:21.954 "raid_level": "raid1", 00:17:21.954 "superblock": true, 00:17:21.954 "num_base_bdevs": 4, 00:17:21.954 "num_base_bdevs_discovered": 4, 00:17:21.954 "num_base_bdevs_operational": 4, 00:17:21.954 "base_bdevs_list": [ 00:17:21.954 { 00:17:21.954 "name": "BaseBdev1", 00:17:21.954 "uuid": "4df8313c-cf0a-42af-b4b3-65c1b11e6ac1", 00:17:21.954 "is_configured": true, 00:17:21.954 "data_offset": 2048, 00:17:21.954 "data_size": 63488 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "name": "BaseBdev2", 00:17:21.954 "uuid": "81d0b30f-c557-4cea-babb-70bedf7d8436", 00:17:21.954 "is_configured": true, 00:17:21.954 "data_offset": 2048, 00:17:21.954 "data_size": 63488 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "name": "BaseBdev3", 00:17:21.954 "uuid": "d8b24818-3920-465d-9b8a-2b4f8876df00", 00:17:21.954 "is_configured": true, 
00:17:21.954 "data_offset": 2048, 00:17:21.954 "data_size": 63488 00:17:21.954 }, 00:17:21.954 { 00:17:21.954 "name": "BaseBdev4", 00:17:21.954 "uuid": "8769c839-ccdd-458d-864a-cbb041f13dd5", 00:17:21.954 "is_configured": true, 00:17:21.954 "data_offset": 2048, 00:17:21.954 "data_size": 63488 00:17:21.954 } 00:17:21.954 ] 00:17:21.954 } 00:17:21.954 } 00:17:21.954 }' 00:17:21.954 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:22.213 BaseBdev2 00:17:22.213 BaseBdev3 00:17:22.213 BaseBdev4' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.213 09:09:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.213 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.472 [2024-11-06 09:09:58.774016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:22.472 09:09:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.472 "name": "Existed_Raid", 00:17:22.472 "uuid": "d21c3bc9-ee9c-4ec4-8c85-36b1e3444211", 00:17:22.472 "strip_size_kb": 0, 00:17:22.472 
"state": "online", 00:17:22.472 "raid_level": "raid1", 00:17:22.472 "superblock": true, 00:17:22.472 "num_base_bdevs": 4, 00:17:22.472 "num_base_bdevs_discovered": 3, 00:17:22.472 "num_base_bdevs_operational": 3, 00:17:22.472 "base_bdevs_list": [ 00:17:22.472 { 00:17:22.472 "name": null, 00:17:22.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.472 "is_configured": false, 00:17:22.472 "data_offset": 0, 00:17:22.472 "data_size": 63488 00:17:22.472 }, 00:17:22.472 { 00:17:22.472 "name": "BaseBdev2", 00:17:22.472 "uuid": "81d0b30f-c557-4cea-babb-70bedf7d8436", 00:17:22.472 "is_configured": true, 00:17:22.472 "data_offset": 2048, 00:17:22.472 "data_size": 63488 00:17:22.472 }, 00:17:22.472 { 00:17:22.472 "name": "BaseBdev3", 00:17:22.472 "uuid": "d8b24818-3920-465d-9b8a-2b4f8876df00", 00:17:22.472 "is_configured": true, 00:17:22.472 "data_offset": 2048, 00:17:22.472 "data_size": 63488 00:17:22.472 }, 00:17:22.472 { 00:17:22.472 "name": "BaseBdev4", 00:17:22.472 "uuid": "8769c839-ccdd-458d-864a-cbb041f13dd5", 00:17:22.472 "is_configured": true, 00:17:22.472 "data_offset": 2048, 00:17:22.472 "data_size": 63488 00:17:22.472 } 00:17:22.472 ] 00:17:22.472 }' 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.472 09:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.039 09:09:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.039 [2024-11-06 09:09:59.445760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.039 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.298 [2024-11-06 09:09:59.589819] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.298 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.298 [2024-11-06 09:09:59.741853] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:23.298 [2024-11-06 09:09:59.742006] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.298 [2024-11-06 09:09:59.831251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.559 [2024-11-06 09:09:59.831495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.559 [2024-11-06 09:09:59.831532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.559 BaseBdev2 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.559 09:09:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:23.559 [ 00:17:23.559 { 00:17:23.559 "name": "BaseBdev2", 00:17:23.559 "aliases": [ 00:17:23.559 "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2" 00:17:23.559 ], 00:17:23.559 "product_name": "Malloc disk", 00:17:23.559 "block_size": 512, 00:17:23.559 "num_blocks": 65536, 00:17:23.559 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:23.559 "assigned_rate_limits": { 00:17:23.559 "rw_ios_per_sec": 0, 00:17:23.559 "rw_mbytes_per_sec": 0, 00:17:23.559 "r_mbytes_per_sec": 0, 00:17:23.559 "w_mbytes_per_sec": 0 00:17:23.559 }, 00:17:23.560 "claimed": false, 00:17:23.560 "zoned": false, 00:17:23.560 "supported_io_types": { 00:17:23.560 "read": true, 00:17:23.560 "write": true, 00:17:23.560 "unmap": true, 00:17:23.560 "flush": true, 00:17:23.560 "reset": true, 00:17:23.560 "nvme_admin": false, 00:17:23.560 "nvme_io": false, 00:17:23.560 "nvme_io_md": false, 00:17:23.560 "write_zeroes": true, 00:17:23.560 "zcopy": true, 00:17:23.560 "get_zone_info": false, 00:17:23.560 "zone_management": false, 00:17:23.560 "zone_append": false, 00:17:23.560 "compare": false, 00:17:23.560 "compare_and_write": false, 00:17:23.560 "abort": true, 00:17:23.560 "seek_hole": false, 00:17:23.560 "seek_data": false, 00:17:23.560 "copy": true, 00:17:23.560 "nvme_iov_md": false 00:17:23.560 }, 00:17:23.560 "memory_domains": [ 00:17:23.560 { 00:17:23.560 "dma_device_id": "system", 00:17:23.560 "dma_device_type": 1 00:17:23.560 }, 00:17:23.560 { 00:17:23.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.560 "dma_device_type": 2 00:17:23.560 } 00:17:23.560 ], 00:17:23.560 "driver_specific": {} 00:17:23.560 } 00:17:23.560 ] 00:17:23.560 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.560 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:23.560 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:23.560 09:09:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.560 09:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:23.560 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.560 09:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.560 BaseBdev3 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.560 09:10:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.560 [ 00:17:23.560 { 00:17:23.560 "name": "BaseBdev3", 00:17:23.560 "aliases": [ 00:17:23.560 "06b18b95-f812-4798-ad8d-e6cc881326df" 00:17:23.560 ], 00:17:23.560 "product_name": "Malloc disk", 00:17:23.560 "block_size": 512, 00:17:23.560 "num_blocks": 65536, 00:17:23.560 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:23.560 "assigned_rate_limits": { 00:17:23.560 "rw_ios_per_sec": 0, 00:17:23.560 "rw_mbytes_per_sec": 0, 00:17:23.560 "r_mbytes_per_sec": 0, 00:17:23.560 "w_mbytes_per_sec": 0 00:17:23.560 }, 00:17:23.560 "claimed": false, 00:17:23.560 "zoned": false, 00:17:23.560 "supported_io_types": { 00:17:23.560 "read": true, 00:17:23.560 "write": true, 00:17:23.560 "unmap": true, 00:17:23.560 "flush": true, 00:17:23.560 "reset": true, 00:17:23.560 "nvme_admin": false, 00:17:23.560 "nvme_io": false, 00:17:23.560 "nvme_io_md": false, 00:17:23.560 "write_zeroes": true, 00:17:23.560 "zcopy": true, 00:17:23.560 "get_zone_info": false, 00:17:23.560 "zone_management": false, 00:17:23.560 "zone_append": false, 00:17:23.560 "compare": false, 00:17:23.560 "compare_and_write": false, 00:17:23.560 "abort": true, 00:17:23.560 "seek_hole": false, 00:17:23.560 "seek_data": false, 00:17:23.560 "copy": true, 00:17:23.560 "nvme_iov_md": false 00:17:23.560 }, 00:17:23.560 "memory_domains": [ 00:17:23.560 { 00:17:23.560 "dma_device_id": "system", 00:17:23.560 "dma_device_type": 1 00:17:23.560 }, 00:17:23.560 { 00:17:23.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.560 "dma_device_type": 2 00:17:23.560 } 00:17:23.560 ], 00:17:23.560 "driver_specific": {} 00:17:23.560 } 00:17:23.560 ] 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.560 BaseBdev4 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.560 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.819 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.819 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:23.819 09:10:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.819 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.819 [ 00:17:23.819 { 00:17:23.819 "name": "BaseBdev4", 00:17:23.819 "aliases": [ 00:17:23.819 "24ea906e-1e97-42bf-a63e-ff220e58000c" 00:17:23.819 ], 00:17:23.819 "product_name": "Malloc disk", 00:17:23.819 "block_size": 512, 00:17:23.819 "num_blocks": 65536, 00:17:23.819 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:23.819 "assigned_rate_limits": { 00:17:23.819 "rw_ios_per_sec": 0, 00:17:23.820 "rw_mbytes_per_sec": 0, 00:17:23.820 "r_mbytes_per_sec": 0, 00:17:23.820 "w_mbytes_per_sec": 0 00:17:23.820 }, 00:17:23.820 "claimed": false, 00:17:23.820 "zoned": false, 00:17:23.820 "supported_io_types": { 00:17:23.820 "read": true, 00:17:23.820 "write": true, 00:17:23.820 "unmap": true, 00:17:23.820 "flush": true, 00:17:23.820 "reset": true, 00:17:23.820 "nvme_admin": false, 00:17:23.820 "nvme_io": false, 00:17:23.820 "nvme_io_md": false, 00:17:23.820 "write_zeroes": true, 00:17:23.820 "zcopy": true, 00:17:23.820 "get_zone_info": false, 00:17:23.820 "zone_management": false, 00:17:23.820 "zone_append": false, 00:17:23.820 "compare": false, 00:17:23.820 "compare_and_write": false, 00:17:23.820 "abort": true, 00:17:23.820 "seek_hole": false, 00:17:23.820 "seek_data": false, 00:17:23.820 "copy": true, 00:17:23.820 "nvme_iov_md": false 00:17:23.820 }, 00:17:23.820 "memory_domains": [ 00:17:23.820 { 00:17:23.820 "dma_device_id": "system", 00:17:23.820 "dma_device_type": 1 00:17:23.820 }, 00:17:23.820 { 00:17:23.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.820 "dma_device_type": 2 00:17:23.820 } 00:17:23.820 ], 00:17:23.820 "driver_specific": {} 00:17:23.820 } 00:17:23.820 ] 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.820 [2024-11-06 09:10:00.122261] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.820 [2024-11-06 09:10:00.122323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.820 [2024-11-06 09:10:00.122353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.820 [2024-11-06 09:10:00.124926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.820 [2024-11-06 09:10:00.125145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.820 "name": "Existed_Raid", 00:17:23.820 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:23.820 "strip_size_kb": 0, 00:17:23.820 "state": "configuring", 00:17:23.820 "raid_level": "raid1", 00:17:23.820 "superblock": true, 00:17:23.820 "num_base_bdevs": 4, 00:17:23.820 "num_base_bdevs_discovered": 3, 00:17:23.820 "num_base_bdevs_operational": 4, 00:17:23.820 "base_bdevs_list": [ 00:17:23.820 { 00:17:23.820 "name": "BaseBdev1", 00:17:23.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.820 "is_configured": false, 00:17:23.820 "data_offset": 0, 00:17:23.820 "data_size": 0 00:17:23.820 }, 00:17:23.820 { 00:17:23.820 "name": "BaseBdev2", 00:17:23.820 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 
00:17:23.820 "is_configured": true, 00:17:23.820 "data_offset": 2048, 00:17:23.820 "data_size": 63488 00:17:23.820 }, 00:17:23.820 { 00:17:23.820 "name": "BaseBdev3", 00:17:23.820 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:23.820 "is_configured": true, 00:17:23.820 "data_offset": 2048, 00:17:23.820 "data_size": 63488 00:17:23.820 }, 00:17:23.820 { 00:17:23.820 "name": "BaseBdev4", 00:17:23.820 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:23.820 "is_configured": true, 00:17:23.820 "data_offset": 2048, 00:17:23.820 "data_size": 63488 00:17:23.820 } 00:17:23.820 ] 00:17:23.820 }' 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.820 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.392 [2024-11-06 09:10:00.682483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.392 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.392 "name": "Existed_Raid", 00:17:24.392 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:24.392 "strip_size_kb": 0, 00:17:24.393 "state": "configuring", 00:17:24.393 "raid_level": "raid1", 00:17:24.393 "superblock": true, 00:17:24.393 "num_base_bdevs": 4, 00:17:24.393 "num_base_bdevs_discovered": 2, 00:17:24.393 "num_base_bdevs_operational": 4, 00:17:24.393 "base_bdevs_list": [ 00:17:24.393 { 00:17:24.393 "name": "BaseBdev1", 00:17:24.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.393 "is_configured": false, 00:17:24.393 "data_offset": 0, 00:17:24.393 "data_size": 0 00:17:24.393 }, 00:17:24.393 { 00:17:24.393 "name": null, 00:17:24.393 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:24.393 
"is_configured": false, 00:17:24.393 "data_offset": 0, 00:17:24.393 "data_size": 63488 00:17:24.393 }, 00:17:24.393 { 00:17:24.393 "name": "BaseBdev3", 00:17:24.393 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:24.393 "is_configured": true, 00:17:24.393 "data_offset": 2048, 00:17:24.393 "data_size": 63488 00:17:24.393 }, 00:17:24.393 { 00:17:24.393 "name": "BaseBdev4", 00:17:24.393 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:24.393 "is_configured": true, 00:17:24.393 "data_offset": 2048, 00:17:24.393 "data_size": 63488 00:17:24.393 } 00:17:24.393 ] 00:17:24.393 }' 00:17:24.393 09:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.393 09:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.962 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.962 [2024-11-06 09:10:01.309618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:24.963 BaseBdev1 
00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.963 [ 00:17:24.963 { 00:17:24.963 "name": "BaseBdev1", 00:17:24.963 "aliases": [ 00:17:24.963 "65f9d36b-a817-4c72-827c-a9798849434e" 00:17:24.963 ], 00:17:24.963 "product_name": "Malloc disk", 00:17:24.963 "block_size": 512, 00:17:24.963 "num_blocks": 65536, 00:17:24.963 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:24.963 "assigned_rate_limits": { 00:17:24.963 
"rw_ios_per_sec": 0, 00:17:24.963 "rw_mbytes_per_sec": 0, 00:17:24.963 "r_mbytes_per_sec": 0, 00:17:24.963 "w_mbytes_per_sec": 0 00:17:24.963 }, 00:17:24.963 "claimed": true, 00:17:24.963 "claim_type": "exclusive_write", 00:17:24.963 "zoned": false, 00:17:24.963 "supported_io_types": { 00:17:24.963 "read": true, 00:17:24.963 "write": true, 00:17:24.963 "unmap": true, 00:17:24.963 "flush": true, 00:17:24.963 "reset": true, 00:17:24.963 "nvme_admin": false, 00:17:24.963 "nvme_io": false, 00:17:24.963 "nvme_io_md": false, 00:17:24.963 "write_zeroes": true, 00:17:24.963 "zcopy": true, 00:17:24.963 "get_zone_info": false, 00:17:24.963 "zone_management": false, 00:17:24.963 "zone_append": false, 00:17:24.963 "compare": false, 00:17:24.963 "compare_and_write": false, 00:17:24.963 "abort": true, 00:17:24.963 "seek_hole": false, 00:17:24.963 "seek_data": false, 00:17:24.963 "copy": true, 00:17:24.963 "nvme_iov_md": false 00:17:24.963 }, 00:17:24.963 "memory_domains": [ 00:17:24.963 { 00:17:24.963 "dma_device_id": "system", 00:17:24.963 "dma_device_type": 1 00:17:24.963 }, 00:17:24.963 { 00:17:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.963 "dma_device_type": 2 00:17:24.963 } 00:17:24.963 ], 00:17:24.963 "driver_specific": {} 00:17:24.963 } 00:17:24.963 ] 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.963 "name": "Existed_Raid", 00:17:24.963 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:24.963 "strip_size_kb": 0, 00:17:24.963 "state": "configuring", 00:17:24.963 "raid_level": "raid1", 00:17:24.963 "superblock": true, 00:17:24.963 "num_base_bdevs": 4, 00:17:24.963 "num_base_bdevs_discovered": 3, 00:17:24.963 "num_base_bdevs_operational": 4, 00:17:24.963 "base_bdevs_list": [ 00:17:24.963 { 00:17:24.963 "name": "BaseBdev1", 00:17:24.963 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:24.963 "is_configured": true, 00:17:24.963 "data_offset": 2048, 00:17:24.963 "data_size": 63488 
00:17:24.963 }, 00:17:24.963 { 00:17:24.963 "name": null, 00:17:24.963 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:24.963 "is_configured": false, 00:17:24.963 "data_offset": 0, 00:17:24.963 "data_size": 63488 00:17:24.963 }, 00:17:24.963 { 00:17:24.963 "name": "BaseBdev3", 00:17:24.963 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:24.963 "is_configured": true, 00:17:24.963 "data_offset": 2048, 00:17:24.963 "data_size": 63488 00:17:24.963 }, 00:17:24.963 { 00:17:24.963 "name": "BaseBdev4", 00:17:24.963 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:24.963 "is_configured": true, 00:17:24.963 "data_offset": 2048, 00:17:24.963 "data_size": 63488 00:17:24.963 } 00:17:24.963 ] 00:17:24.963 }' 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.963 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.532 
[2024-11-06 09:10:01.917933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.532 09:10:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.532 "name": "Existed_Raid", 00:17:25.532 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:25.532 "strip_size_kb": 0, 00:17:25.532 "state": "configuring", 00:17:25.532 "raid_level": "raid1", 00:17:25.532 "superblock": true, 00:17:25.532 "num_base_bdevs": 4, 00:17:25.532 "num_base_bdevs_discovered": 2, 00:17:25.532 "num_base_bdevs_operational": 4, 00:17:25.532 "base_bdevs_list": [ 00:17:25.532 { 00:17:25.532 "name": "BaseBdev1", 00:17:25.532 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:25.532 "is_configured": true, 00:17:25.532 "data_offset": 2048, 00:17:25.532 "data_size": 63488 00:17:25.532 }, 00:17:25.532 { 00:17:25.532 "name": null, 00:17:25.532 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:25.532 "is_configured": false, 00:17:25.532 "data_offset": 0, 00:17:25.532 "data_size": 63488 00:17:25.532 }, 00:17:25.532 { 00:17:25.532 "name": null, 00:17:25.532 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:25.532 "is_configured": false, 00:17:25.532 "data_offset": 0, 00:17:25.532 "data_size": 63488 00:17:25.532 }, 00:17:25.532 { 00:17:25.532 "name": "BaseBdev4", 00:17:25.532 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:25.532 "is_configured": true, 00:17:25.532 "data_offset": 2048, 00:17:25.532 "data_size": 63488 00:17:25.532 } 00:17:25.532 ] 00:17:25.532 }' 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.532 09:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:26.101 
09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 [2024-11-06 09:10:02.506049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.101 "name": "Existed_Raid", 00:17:26.101 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:26.101 "strip_size_kb": 0, 00:17:26.101 "state": "configuring", 00:17:26.101 "raid_level": "raid1", 00:17:26.101 "superblock": true, 00:17:26.101 "num_base_bdevs": 4, 00:17:26.101 "num_base_bdevs_discovered": 3, 00:17:26.101 "num_base_bdevs_operational": 4, 00:17:26.101 "base_bdevs_list": [ 00:17:26.101 { 00:17:26.101 "name": "BaseBdev1", 00:17:26.101 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:26.101 "is_configured": true, 00:17:26.101 "data_offset": 2048, 00:17:26.101 "data_size": 63488 00:17:26.101 }, 00:17:26.101 { 00:17:26.101 "name": null, 00:17:26.101 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:26.101 "is_configured": false, 00:17:26.101 "data_offset": 0, 00:17:26.101 "data_size": 63488 00:17:26.101 }, 00:17:26.101 { 00:17:26.101 "name": "BaseBdev3", 00:17:26.101 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:26.101 "is_configured": true, 00:17:26.101 "data_offset": 2048, 00:17:26.101 "data_size": 63488 00:17:26.101 }, 00:17:26.101 { 00:17:26.101 "name": "BaseBdev4", 00:17:26.101 "uuid": 
"24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:26.101 "is_configured": true, 00:17:26.101 "data_offset": 2048, 00:17:26.101 "data_size": 63488 00:17:26.101 } 00:17:26.101 ] 00:17:26.101 }' 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.101 09:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.692 [2024-11-06 09:10:03.098275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.692 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.951 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.951 "name": "Existed_Raid", 00:17:26.951 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:26.951 "strip_size_kb": 0, 00:17:26.951 "state": "configuring", 00:17:26.951 "raid_level": "raid1", 00:17:26.951 "superblock": true, 00:17:26.951 "num_base_bdevs": 4, 00:17:26.951 "num_base_bdevs_discovered": 2, 00:17:26.951 "num_base_bdevs_operational": 4, 00:17:26.951 "base_bdevs_list": [ 00:17:26.951 { 00:17:26.951 "name": null, 00:17:26.951 
"uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:26.951 "is_configured": false, 00:17:26.951 "data_offset": 0, 00:17:26.951 "data_size": 63488 00:17:26.951 }, 00:17:26.951 { 00:17:26.951 "name": null, 00:17:26.951 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:26.951 "is_configured": false, 00:17:26.951 "data_offset": 0, 00:17:26.951 "data_size": 63488 00:17:26.951 }, 00:17:26.951 { 00:17:26.951 "name": "BaseBdev3", 00:17:26.951 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:26.951 "is_configured": true, 00:17:26.951 "data_offset": 2048, 00:17:26.951 "data_size": 63488 00:17:26.951 }, 00:17:26.951 { 00:17:26.951 "name": "BaseBdev4", 00:17:26.951 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:26.951 "is_configured": true, 00:17:26.951 "data_offset": 2048, 00:17:26.951 "data_size": 63488 00:17:26.951 } 00:17:26.951 ] 00:17:26.951 }' 00:17:26.951 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.951 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.209 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.209 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.209 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.209 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:27.209 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.467 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:27.467 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:27.467 09:10:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.468 [2024-11-06 09:10:03.758810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.468 "name": "Existed_Raid", 00:17:27.468 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:27.468 "strip_size_kb": 0, 00:17:27.468 "state": "configuring", 00:17:27.468 "raid_level": "raid1", 00:17:27.468 "superblock": true, 00:17:27.468 "num_base_bdevs": 4, 00:17:27.468 "num_base_bdevs_discovered": 3, 00:17:27.468 "num_base_bdevs_operational": 4, 00:17:27.468 "base_bdevs_list": [ 00:17:27.468 { 00:17:27.468 "name": null, 00:17:27.468 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:27.468 "is_configured": false, 00:17:27.468 "data_offset": 0, 00:17:27.468 "data_size": 63488 00:17:27.468 }, 00:17:27.468 { 00:17:27.468 "name": "BaseBdev2", 00:17:27.468 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:27.468 "is_configured": true, 00:17:27.468 "data_offset": 2048, 00:17:27.468 "data_size": 63488 00:17:27.468 }, 00:17:27.468 { 00:17:27.468 "name": "BaseBdev3", 00:17:27.468 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:27.468 "is_configured": true, 00:17:27.468 "data_offset": 2048, 00:17:27.468 "data_size": 63488 00:17:27.468 }, 00:17:27.468 { 00:17:27.468 "name": "BaseBdev4", 00:17:27.468 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:27.468 "is_configured": true, 00:17:27.468 "data_offset": 2048, 00:17:27.468 "data_size": 63488 00:17:27.468 } 00:17:27.468 ] 00:17:27.468 }' 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.468 09:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 65f9d36b-a817-4c72-827c-a9798849434e 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.035 [2024-11-06 09:10:04.553269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:28.035 [2024-11-06 09:10:04.553918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:28.035 [2024-11-06 09:10:04.553951] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:28.035 NewBaseBdev 00:17:28.035 [2024-11-06 09:10:04.554293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:17:28.035 [2024-11-06 09:10:04.554557] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:28.035 [2024-11-06 09:10:04.554583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:28.035 [2024-11-06 09:10:04.554795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.035 09:10:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:28.295 [ 00:17:28.295 { 00:17:28.295 "name": "NewBaseBdev", 00:17:28.295 "aliases": [ 00:17:28.295 "65f9d36b-a817-4c72-827c-a9798849434e" 00:17:28.295 ], 00:17:28.295 "product_name": "Malloc disk", 00:17:28.295 "block_size": 512, 00:17:28.295 "num_blocks": 65536, 00:17:28.295 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:28.295 "assigned_rate_limits": { 00:17:28.295 "rw_ios_per_sec": 0, 00:17:28.295 "rw_mbytes_per_sec": 0, 00:17:28.295 "r_mbytes_per_sec": 0, 00:17:28.295 "w_mbytes_per_sec": 0 00:17:28.295 }, 00:17:28.295 "claimed": true, 00:17:28.295 "claim_type": "exclusive_write", 00:17:28.295 "zoned": false, 00:17:28.295 "supported_io_types": { 00:17:28.295 "read": true, 00:17:28.295 "write": true, 00:17:28.295 "unmap": true, 00:17:28.295 "flush": true, 00:17:28.295 "reset": true, 00:17:28.295 "nvme_admin": false, 00:17:28.295 "nvme_io": false, 00:17:28.295 "nvme_io_md": false, 00:17:28.295 "write_zeroes": true, 00:17:28.295 "zcopy": true, 00:17:28.295 "get_zone_info": false, 00:17:28.295 "zone_management": false, 00:17:28.295 "zone_append": false, 00:17:28.295 "compare": false, 00:17:28.295 "compare_and_write": false, 00:17:28.295 "abort": true, 00:17:28.295 "seek_hole": false, 00:17:28.295 "seek_data": false, 00:17:28.295 "copy": true, 00:17:28.295 "nvme_iov_md": false 00:17:28.295 }, 00:17:28.295 "memory_domains": [ 00:17:28.295 { 00:17:28.295 "dma_device_id": "system", 00:17:28.295 "dma_device_type": 1 00:17:28.295 }, 00:17:28.295 { 00:17:28.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.295 "dma_device_type": 2 00:17:28.295 } 00:17:28.295 ], 00:17:28.295 "driver_specific": {} 00:17:28.295 } 00:17:28.295 ] 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.295 "name": "Existed_Raid", 00:17:28.295 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:28.295 "strip_size_kb": 0, 00:17:28.295 "state": "online", 00:17:28.295 "raid_level": 
"raid1", 00:17:28.295 "superblock": true, 00:17:28.295 "num_base_bdevs": 4, 00:17:28.295 "num_base_bdevs_discovered": 4, 00:17:28.295 "num_base_bdevs_operational": 4, 00:17:28.295 "base_bdevs_list": [ 00:17:28.295 { 00:17:28.295 "name": "NewBaseBdev", 00:17:28.295 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:28.295 "is_configured": true, 00:17:28.295 "data_offset": 2048, 00:17:28.295 "data_size": 63488 00:17:28.295 }, 00:17:28.295 { 00:17:28.295 "name": "BaseBdev2", 00:17:28.295 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:28.295 "is_configured": true, 00:17:28.295 "data_offset": 2048, 00:17:28.295 "data_size": 63488 00:17:28.295 }, 00:17:28.295 { 00:17:28.295 "name": "BaseBdev3", 00:17:28.295 "uuid": "06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:28.295 "is_configured": true, 00:17:28.295 "data_offset": 2048, 00:17:28.295 "data_size": 63488 00:17:28.295 }, 00:17:28.295 { 00:17:28.295 "name": "BaseBdev4", 00:17:28.295 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:28.295 "is_configured": true, 00:17:28.295 "data_offset": 2048, 00:17:28.295 "data_size": 63488 00:17:28.295 } 00:17:28.295 ] 00:17:28.295 }' 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.295 09:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.862 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 [2024-11-06 09:10:05.101920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:28.863 "name": "Existed_Raid", 00:17:28.863 "aliases": [ 00:17:28.863 "dcaf18a5-f5af-49c9-ade0-b1c07eff0167" 00:17:28.863 ], 00:17:28.863 "product_name": "Raid Volume", 00:17:28.863 "block_size": 512, 00:17:28.863 "num_blocks": 63488, 00:17:28.863 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:28.863 "assigned_rate_limits": { 00:17:28.863 "rw_ios_per_sec": 0, 00:17:28.863 "rw_mbytes_per_sec": 0, 00:17:28.863 "r_mbytes_per_sec": 0, 00:17:28.863 "w_mbytes_per_sec": 0 00:17:28.863 }, 00:17:28.863 "claimed": false, 00:17:28.863 "zoned": false, 00:17:28.863 "supported_io_types": { 00:17:28.863 "read": true, 00:17:28.863 "write": true, 00:17:28.863 "unmap": false, 00:17:28.863 "flush": false, 00:17:28.863 "reset": true, 00:17:28.863 "nvme_admin": false, 00:17:28.863 "nvme_io": false, 00:17:28.863 "nvme_io_md": false, 00:17:28.863 "write_zeroes": true, 00:17:28.863 "zcopy": false, 00:17:28.863 "get_zone_info": false, 00:17:28.863 "zone_management": false, 00:17:28.863 "zone_append": false, 00:17:28.863 "compare": false, 00:17:28.863 "compare_and_write": false, 00:17:28.863 "abort": false, 00:17:28.863 "seek_hole": false, 
00:17:28.863 "seek_data": false, 00:17:28.863 "copy": false, 00:17:28.863 "nvme_iov_md": false 00:17:28.863 }, 00:17:28.863 "memory_domains": [ 00:17:28.863 { 00:17:28.863 "dma_device_id": "system", 00:17:28.863 "dma_device_type": 1 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.863 "dma_device_type": 2 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "system", 00:17:28.863 "dma_device_type": 1 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.863 "dma_device_type": 2 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "system", 00:17:28.863 "dma_device_type": 1 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.863 "dma_device_type": 2 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "system", 00:17:28.863 "dma_device_type": 1 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.863 "dma_device_type": 2 00:17:28.863 } 00:17:28.863 ], 00:17:28.863 "driver_specific": { 00:17:28.863 "raid": { 00:17:28.863 "uuid": "dcaf18a5-f5af-49c9-ade0-b1c07eff0167", 00:17:28.863 "strip_size_kb": 0, 00:17:28.863 "state": "online", 00:17:28.863 "raid_level": "raid1", 00:17:28.863 "superblock": true, 00:17:28.863 "num_base_bdevs": 4, 00:17:28.863 "num_base_bdevs_discovered": 4, 00:17:28.863 "num_base_bdevs_operational": 4, 00:17:28.863 "base_bdevs_list": [ 00:17:28.863 { 00:17:28.863 "name": "NewBaseBdev", 00:17:28.863 "uuid": "65f9d36b-a817-4c72-827c-a9798849434e", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 2048, 00:17:28.863 "data_size": 63488 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev2", 00:17:28.863 "uuid": "b3fd3aa4-fab1-46f4-84df-f88bc6fadce2", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 2048, 00:17:28.863 "data_size": 63488 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev3", 00:17:28.863 "uuid": 
"06b18b95-f812-4798-ad8d-e6cc881326df", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 2048, 00:17:28.863 "data_size": 63488 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev4", 00:17:28.863 "uuid": "24ea906e-1e97-42bf-a63e-ff220e58000c", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 2048, 00:17:28.863 "data_size": 63488 00:17:28.863 } 00:17:28.863 ] 00:17:28.863 } 00:17:28.863 } 00:17:28.863 }' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:28.863 BaseBdev2 00:17:28.863 BaseBdev3 00:17:28.863 BaseBdev4' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:28.863 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.122 
09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.122 [2024-11-06 09:10:05.453569] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.122 [2024-11-06 09:10:05.453604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.122 [2024-11-06 09:10:05.453694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.122 [2024-11-06 09:10:05.454101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.122 [2024-11-06 09:10:05.454127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:29.122 09:10:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73932 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73932 ']' 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73932 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73932 00:17:29.122 killing process with pid 73932 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73932' 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73932 00:17:29.122 [2024-11-06 09:10:05.487273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:29.122 09:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73932 00:17:29.381 [2024-11-06 09:10:05.845423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.757 09:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:30.757 00:17:30.757 real 0m13.219s 00:17:30.757 user 0m21.998s 00:17:30.757 sys 0m1.831s 00:17:30.757 09:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:30.757 09:10:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.757 ************************************ 00:17:30.757 END TEST raid_state_function_test_sb 00:17:30.757 ************************************ 00:17:30.757 09:10:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:17:30.757 09:10:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:30.757 09:10:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:30.757 09:10:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.757 ************************************ 00:17:30.757 START TEST raid_superblock_test 00:17:30.757 ************************************ 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74617 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74617 00:17:30.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74617 ']' 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:30.757 09:10:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.757 [2024-11-06 09:10:07.052213] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:17:30.757 [2024-11-06 09:10:07.052604] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74617 ] 00:17:30.757 [2024-11-06 09:10:07.237902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.017 [2024-11-06 09:10:07.370279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.276 [2024-11-06 09:10:07.580811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.276 [2024-11-06 09:10:07.580897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:17:31.535 
09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.535 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 malloc1 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 [2024-11-06 09:10:08.081645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.793 [2024-11-06 09:10:08.081728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.793 [2024-11-06 09:10:08.081765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:31.793 [2024-11-06 09:10:08.081782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.793 [2024-11-06 09:10:08.084730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.793 [2024-11-06 09:10:08.084779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.793 pt1 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 malloc2 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 [2024-11-06 09:10:08.139268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.793 [2024-11-06 09:10:08.139364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.793 [2024-11-06 09:10:08.139416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:31.793 [2024-11-06 09:10:08.139432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.793 [2024-11-06 09:10:08.142446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.793 [2024-11-06 09:10:08.142493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.793 
pt2 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 malloc3 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.793 [2024-11-06 09:10:08.206308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.793 [2024-11-06 09:10:08.206418] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.793 [2024-11-06 09:10:08.206456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:31.793 [2024-11-06 09:10:08.206487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.793 [2024-11-06 09:10:08.209557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.793 [2024-11-06 09:10:08.209621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.793 pt3 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.793 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.794 malloc4 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.794 [2024-11-06 09:10:08.263675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:31.794 [2024-11-06 09:10:08.263950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.794 [2024-11-06 09:10:08.264050] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:31.794 [2024-11-06 09:10:08.264280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.794 [2024-11-06 09:10:08.267445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.794 [2024-11-06 09:10:08.267639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:31.794 pt4 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.794 [2024-11-06 09:10:08.276071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.794 [2024-11-06 09:10:08.278887] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.794 [2024-11-06 09:10:08.279126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:31.794 [2024-11-06 09:10:08.279387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:31.794 [2024-11-06 09:10:08.279848] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:31.794 [2024-11-06 09:10:08.280024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:31.794 [2024-11-06 09:10:08.280586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:31.794 [2024-11-06 09:10:08.281054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:31.794 [2024-11-06 09:10:08.281233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:31.794 [2024-11-06 09:10:08.281589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.794 
09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.794 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.052 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.052 "name": "raid_bdev1", 00:17:32.052 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:32.052 "strip_size_kb": 0, 00:17:32.052 "state": "online", 00:17:32.052 "raid_level": "raid1", 00:17:32.052 "superblock": true, 00:17:32.052 "num_base_bdevs": 4, 00:17:32.052 "num_base_bdevs_discovered": 4, 00:17:32.052 "num_base_bdevs_operational": 4, 00:17:32.052 "base_bdevs_list": [ 00:17:32.052 { 00:17:32.052 "name": "pt1", 00:17:32.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.052 "is_configured": true, 00:17:32.052 "data_offset": 2048, 00:17:32.052 "data_size": 63488 00:17:32.052 }, 00:17:32.052 { 00:17:32.052 "name": "pt2", 00:17:32.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.052 "is_configured": true, 00:17:32.052 "data_offset": 2048, 00:17:32.052 "data_size": 63488 00:17:32.052 }, 00:17:32.052 { 00:17:32.052 "name": "pt3", 00:17:32.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.052 "is_configured": true, 00:17:32.052 "data_offset": 2048, 00:17:32.052 "data_size": 63488 
00:17:32.052 }, 00:17:32.052 { 00:17:32.052 "name": "pt4", 00:17:32.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:32.052 "is_configured": true, 00:17:32.052 "data_offset": 2048, 00:17:32.052 "data_size": 63488 00:17:32.052 } 00:17:32.052 ] 00:17:32.052 }' 00:17:32.052 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.052 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.313 [2024-11-06 09:10:08.806021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.313 "name": "raid_bdev1", 00:17:32.313 "aliases": [ 00:17:32.313 "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533" 00:17:32.313 ], 
00:17:32.313 "product_name": "Raid Volume", 00:17:32.313 "block_size": 512, 00:17:32.313 "num_blocks": 63488, 00:17:32.313 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:32.313 "assigned_rate_limits": { 00:17:32.313 "rw_ios_per_sec": 0, 00:17:32.313 "rw_mbytes_per_sec": 0, 00:17:32.313 "r_mbytes_per_sec": 0, 00:17:32.313 "w_mbytes_per_sec": 0 00:17:32.313 }, 00:17:32.313 "claimed": false, 00:17:32.313 "zoned": false, 00:17:32.313 "supported_io_types": { 00:17:32.313 "read": true, 00:17:32.313 "write": true, 00:17:32.313 "unmap": false, 00:17:32.313 "flush": false, 00:17:32.313 "reset": true, 00:17:32.313 "nvme_admin": false, 00:17:32.313 "nvme_io": false, 00:17:32.313 "nvme_io_md": false, 00:17:32.313 "write_zeroes": true, 00:17:32.313 "zcopy": false, 00:17:32.313 "get_zone_info": false, 00:17:32.313 "zone_management": false, 00:17:32.313 "zone_append": false, 00:17:32.313 "compare": false, 00:17:32.313 "compare_and_write": false, 00:17:32.313 "abort": false, 00:17:32.313 "seek_hole": false, 00:17:32.313 "seek_data": false, 00:17:32.313 "copy": false, 00:17:32.313 "nvme_iov_md": false 00:17:32.313 }, 00:17:32.313 "memory_domains": [ 00:17:32.313 { 00:17:32.313 "dma_device_id": "system", 00:17:32.313 "dma_device_type": 1 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.313 "dma_device_type": 2 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "dma_device_id": "system", 00:17:32.313 "dma_device_type": 1 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.313 "dma_device_type": 2 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "dma_device_id": "system", 00:17:32.313 "dma_device_type": 1 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.313 "dma_device_type": 2 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "dma_device_id": "system", 00:17:32.313 "dma_device_type": 1 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:32.313 "dma_device_type": 2 00:17:32.313 } 00:17:32.313 ], 00:17:32.313 "driver_specific": { 00:17:32.313 "raid": { 00:17:32.313 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:32.313 "strip_size_kb": 0, 00:17:32.313 "state": "online", 00:17:32.313 "raid_level": "raid1", 00:17:32.313 "superblock": true, 00:17:32.313 "num_base_bdevs": 4, 00:17:32.313 "num_base_bdevs_discovered": 4, 00:17:32.313 "num_base_bdevs_operational": 4, 00:17:32.313 "base_bdevs_list": [ 00:17:32.313 { 00:17:32.313 "name": "pt1", 00:17:32.313 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:32.313 "is_configured": true, 00:17:32.313 "data_offset": 2048, 00:17:32.313 "data_size": 63488 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "name": "pt2", 00:17:32.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:32.313 "is_configured": true, 00:17:32.313 "data_offset": 2048, 00:17:32.313 "data_size": 63488 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "name": "pt3", 00:17:32.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:32.313 "is_configured": true, 00:17:32.313 "data_offset": 2048, 00:17:32.313 "data_size": 63488 00:17:32.313 }, 00:17:32.313 { 00:17:32.313 "name": "pt4", 00:17:32.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:32.313 "is_configured": true, 00:17:32.313 "data_offset": 2048, 00:17:32.313 "data_size": 63488 00:17:32.313 } 00:17:32.313 ] 00:17:32.313 } 00:17:32.313 } 00:17:32.313 }' 00:17:32.313 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:32.586 pt2 00:17:32.586 pt3 00:17:32.586 pt4' 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.586 09:10:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.586 09:10:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.586 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 [2024-11-06 09:10:09.174121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a3b24b6-e8a8-41fc-8a47-0f887ebb8533 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a3b24b6-e8a8-41fc-8a47-0f887ebb8533 ']' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 [2024-11-06 09:10:09.237746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.865 [2024-11-06 09:10:09.237945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.865 [2024-11-06 09:10:09.238187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.865 [2024-11-06 09:10:09.238445] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.865 [2024-11-06 09:10:09.238660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.865 09:10:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:32.865 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.866 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:17:32.866 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.866 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.866 [2024-11-06 09:10:09.389767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:32.866 [2024-11-06 09:10:09.392616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:32.866 [2024-11-06 09:10:09.392816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:32.866 [2024-11-06 09:10:09.393095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:32.866 [2024-11-06 09:10:09.393319] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:32.866 [2024-11-06 09:10:09.393589] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:32.866 [2024-11-06 09:10:09.393869] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:32.866 [2024-11-06 09:10:09.394101] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:17:32.866 [2024-11-06 09:10:09.394291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.866 [2024-11-06 09:10:09.394470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:17:32.866 request: 00:17:32.866 { 00:17:32.866 "name": "raid_bdev1", 00:17:32.866 "raid_level": "raid1", 00:17:32.866 "base_bdevs": [ 00:17:33.124 "malloc1", 00:17:33.124 "malloc2", 00:17:33.124 "malloc3", 00:17:33.124 "malloc4" 00:17:33.124 ], 00:17:33.124 "superblock": false, 00:17:33.124 "method": "bdev_raid_create", 00:17:33.124 "req_id": 1 00:17:33.124 } 00:17:33.124 Got JSON-RPC error response 00:17:33.124 response: 00:17:33.124 { 00:17:33.124 "code": -17, 00:17:33.124 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:33.124 } 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:33.124 
09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.124 [2024-11-06 09:10:09.458955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:33.124 [2024-11-06 09:10:09.459168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.124 [2024-11-06 09:10:09.459207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:33.124 [2024-11-06 09:10:09.459237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.124 [2024-11-06 09:10:09.462301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.124 [2024-11-06 09:10:09.462546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:33.124 [2024-11-06 09:10:09.462668] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:33.124 [2024-11-06 09:10:09.462758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:33.124 pt1 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.124 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.125 09:10:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.125 "name": "raid_bdev1", 00:17:33.125 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:33.125 "strip_size_kb": 0, 00:17:33.125 "state": "configuring", 00:17:33.125 "raid_level": "raid1", 00:17:33.125 "superblock": true, 00:17:33.125 "num_base_bdevs": 4, 00:17:33.125 "num_base_bdevs_discovered": 1, 00:17:33.125 "num_base_bdevs_operational": 4, 00:17:33.125 "base_bdevs_list": [ 00:17:33.125 { 00:17:33.125 "name": "pt1", 00:17:33.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.125 "is_configured": true, 00:17:33.125 "data_offset": 2048, 00:17:33.125 "data_size": 63488 00:17:33.125 }, 00:17:33.125 { 00:17:33.125 "name": null, 00:17:33.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.125 "is_configured": false, 00:17:33.125 "data_offset": 2048, 00:17:33.125 "data_size": 63488 00:17:33.125 }, 00:17:33.125 { 00:17:33.125 "name": null, 00:17:33.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.125 
"is_configured": false, 00:17:33.125 "data_offset": 2048, 00:17:33.125 "data_size": 63488 00:17:33.125 }, 00:17:33.125 { 00:17:33.125 "name": null, 00:17:33.125 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:33.125 "is_configured": false, 00:17:33.125 "data_offset": 2048, 00:17:33.125 "data_size": 63488 00:17:33.125 } 00:17:33.125 ] 00:17:33.125 }' 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.125 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.700 [2024-11-06 09:10:09.983199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.700 [2024-11-06 09:10:09.983489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.700 [2024-11-06 09:10:09.983536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:33.700 [2024-11-06 09:10:09.983557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.700 [2024-11-06 09:10:09.984140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.700 [2024-11-06 09:10:09.984173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.700 [2024-11-06 09:10:09.984272] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:33.700 [2024-11-06 09:10:09.984316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:17:33.700 pt2 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.700 [2024-11-06 09:10:09.991182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.700 09:10:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.700 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.700 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.700 09:10:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.700 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.700 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.701 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.701 "name": "raid_bdev1", 00:17:33.701 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:33.701 "strip_size_kb": 0, 00:17:33.701 "state": "configuring", 00:17:33.701 "raid_level": "raid1", 00:17:33.701 "superblock": true, 00:17:33.701 "num_base_bdevs": 4, 00:17:33.701 "num_base_bdevs_discovered": 1, 00:17:33.701 "num_base_bdevs_operational": 4, 00:17:33.701 "base_bdevs_list": [ 00:17:33.701 { 00:17:33.701 "name": "pt1", 00:17:33.701 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:33.701 "is_configured": true, 00:17:33.701 "data_offset": 2048, 00:17:33.701 "data_size": 63488 00:17:33.701 }, 00:17:33.701 { 00:17:33.701 "name": null, 00:17:33.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:33.701 "is_configured": false, 00:17:33.701 "data_offset": 0, 00:17:33.701 "data_size": 63488 00:17:33.701 }, 00:17:33.701 { 00:17:33.701 "name": null, 00:17:33.701 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:33.701 "is_configured": false, 00:17:33.701 "data_offset": 2048, 00:17:33.701 "data_size": 63488 00:17:33.701 }, 00:17:33.701 { 00:17:33.701 "name": null, 00:17:33.701 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:33.701 "is_configured": false, 00:17:33.701 "data_offset": 2048, 00:17:33.701 "data_size": 63488 00:17:33.701 } 00:17:33.701 ] 00:17:33.701 }' 00:17:33.701 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.701 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.268 [2024-11-06 09:10:10.527361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.268 [2024-11-06 09:10:10.527467] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.268 [2024-11-06 09:10:10.527505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:34.268 [2024-11-06 09:10:10.527523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.268 [2024-11-06 09:10:10.528153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.268 [2024-11-06 09:10:10.528358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.268 [2024-11-06 09:10:10.528520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:34.268 [2024-11-06 09:10:10.528555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.268 pt2 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:34.268 09:10:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.268 [2024-11-06 09:10:10.539332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:34.268 [2024-11-06 09:10:10.539573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.268 [2024-11-06 09:10:10.539620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:34.268 [2024-11-06 09:10:10.539636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.268 [2024-11-06 09:10:10.540160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.268 [2024-11-06 09:10:10.540193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:34.268 [2024-11-06 09:10:10.540277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:34.268 [2024-11-06 09:10:10.540305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:34.268 pt3 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.268 [2024-11-06 09:10:10.551317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:34.268 [2024-11-06 
09:10:10.551372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.268 [2024-11-06 09:10:10.551400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:34.268 [2024-11-06 09:10:10.551414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.268 [2024-11-06 09:10:10.551904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.268 [2024-11-06 09:10:10.551935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:34.268 [2024-11-06 09:10:10.552038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:34.268 [2024-11-06 09:10:10.552068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:34.268 [2024-11-06 09:10:10.552278] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:34.268 [2024-11-06 09:10:10.552302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:34.268 [2024-11-06 09:10:10.552658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:34.268 [2024-11-06 09:10:10.552918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:34.268 [2024-11-06 09:10:10.552940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:34.268 [2024-11-06 09:10:10.553124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.268 pt4 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.268 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.268 "name": "raid_bdev1", 00:17:34.268 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:34.268 "strip_size_kb": 0, 00:17:34.268 "state": "online", 00:17:34.268 "raid_level": "raid1", 00:17:34.268 "superblock": true, 00:17:34.268 "num_base_bdevs": 4, 00:17:34.268 
"num_base_bdevs_discovered": 4, 00:17:34.268 "num_base_bdevs_operational": 4, 00:17:34.268 "base_bdevs_list": [ 00:17:34.268 { 00:17:34.268 "name": "pt1", 00:17:34.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.268 "is_configured": true, 00:17:34.268 "data_offset": 2048, 00:17:34.268 "data_size": 63488 00:17:34.268 }, 00:17:34.268 { 00:17:34.268 "name": "pt2", 00:17:34.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.268 "is_configured": true, 00:17:34.268 "data_offset": 2048, 00:17:34.268 "data_size": 63488 00:17:34.268 }, 00:17:34.268 { 00:17:34.268 "name": "pt3", 00:17:34.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.268 "is_configured": true, 00:17:34.269 "data_offset": 2048, 00:17:34.269 "data_size": 63488 00:17:34.269 }, 00:17:34.269 { 00:17:34.269 "name": "pt4", 00:17:34.269 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:34.269 "is_configured": true, 00:17:34.269 "data_offset": 2048, 00:17:34.269 "data_size": 63488 00:17:34.269 } 00:17:34.269 ] 00:17:34.269 }' 00:17:34.269 09:10:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.269 09:10:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.835 09:10:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.835 [2024-11-06 09:10:11.088025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.835 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.835 "name": "raid_bdev1", 00:17:34.835 "aliases": [ 00:17:34.835 "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533" 00:17:34.835 ], 00:17:34.835 "product_name": "Raid Volume", 00:17:34.835 "block_size": 512, 00:17:34.835 "num_blocks": 63488, 00:17:34.835 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:34.835 "assigned_rate_limits": { 00:17:34.835 "rw_ios_per_sec": 0, 00:17:34.835 "rw_mbytes_per_sec": 0, 00:17:34.835 "r_mbytes_per_sec": 0, 00:17:34.835 "w_mbytes_per_sec": 0 00:17:34.835 }, 00:17:34.835 "claimed": false, 00:17:34.835 "zoned": false, 00:17:34.835 "supported_io_types": { 00:17:34.835 "read": true, 00:17:34.835 "write": true, 00:17:34.835 "unmap": false, 00:17:34.835 "flush": false, 00:17:34.835 "reset": true, 00:17:34.835 "nvme_admin": false, 00:17:34.835 "nvme_io": false, 00:17:34.835 "nvme_io_md": false, 00:17:34.835 "write_zeroes": true, 00:17:34.835 "zcopy": false, 00:17:34.835 "get_zone_info": false, 00:17:34.835 "zone_management": false, 00:17:34.835 "zone_append": false, 00:17:34.835 "compare": false, 00:17:34.835 "compare_and_write": false, 00:17:34.835 "abort": false, 00:17:34.835 "seek_hole": false, 00:17:34.835 "seek_data": false, 00:17:34.835 "copy": false, 00:17:34.835 "nvme_iov_md": false 00:17:34.835 }, 00:17:34.835 "memory_domains": [ 00:17:34.835 { 00:17:34.835 "dma_device_id": "system", 00:17:34.835 
"dma_device_type": 1 00:17:34.835 }, 00:17:34.835 { 00:17:34.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.835 "dma_device_type": 2 00:17:34.835 }, 00:17:34.835 { 00:17:34.835 "dma_device_id": "system", 00:17:34.835 "dma_device_type": 1 00:17:34.835 }, 00:17:34.835 { 00:17:34.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.835 "dma_device_type": 2 00:17:34.835 }, 00:17:34.835 { 00:17:34.835 "dma_device_id": "system", 00:17:34.835 "dma_device_type": 1 00:17:34.835 }, 00:17:34.835 { 00:17:34.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.835 "dma_device_type": 2 00:17:34.835 }, 00:17:34.835 { 00:17:34.835 "dma_device_id": "system", 00:17:34.836 "dma_device_type": 1 00:17:34.836 }, 00:17:34.836 { 00:17:34.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.836 "dma_device_type": 2 00:17:34.836 } 00:17:34.836 ], 00:17:34.836 "driver_specific": { 00:17:34.836 "raid": { 00:17:34.836 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:34.836 "strip_size_kb": 0, 00:17:34.836 "state": "online", 00:17:34.836 "raid_level": "raid1", 00:17:34.836 "superblock": true, 00:17:34.836 "num_base_bdevs": 4, 00:17:34.836 "num_base_bdevs_discovered": 4, 00:17:34.836 "num_base_bdevs_operational": 4, 00:17:34.836 "base_bdevs_list": [ 00:17:34.836 { 00:17:34.836 "name": "pt1", 00:17:34.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.836 "is_configured": true, 00:17:34.836 "data_offset": 2048, 00:17:34.836 "data_size": 63488 00:17:34.836 }, 00:17:34.836 { 00:17:34.836 "name": "pt2", 00:17:34.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.836 "is_configured": true, 00:17:34.836 "data_offset": 2048, 00:17:34.836 "data_size": 63488 00:17:34.836 }, 00:17:34.836 { 00:17:34.836 "name": "pt3", 00:17:34.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:34.836 "is_configured": true, 00:17:34.836 "data_offset": 2048, 00:17:34.836 "data_size": 63488 00:17:34.836 }, 00:17:34.836 { 00:17:34.836 "name": "pt4", 00:17:34.836 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:17:34.836 "is_configured": true, 00:17:34.836 "data_offset": 2048, 00:17:34.836 "data_size": 63488 00:17:34.836 } 00:17:34.836 ] 00:17:34.836 } 00:17:34.836 } 00:17:34.836 }' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.836 pt2 00:17:34.836 pt3 00:17:34.836 pt4' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.836 09:10:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.836 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.095 [2024-11-06 09:10:11.448095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a3b24b6-e8a8-41fc-8a47-0f887ebb8533 '!=' 3a3b24b6-e8a8-41fc-8a47-0f887ebb8533 ']' 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.095 [2024-11-06 09:10:11.495751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:35.095 09:10:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.095 "name": "raid_bdev1", 00:17:35.095 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:35.095 "strip_size_kb": 0, 00:17:35.095 "state": "online", 
00:17:35.095 "raid_level": "raid1", 00:17:35.095 "superblock": true, 00:17:35.095 "num_base_bdevs": 4, 00:17:35.095 "num_base_bdevs_discovered": 3, 00:17:35.095 "num_base_bdevs_operational": 3, 00:17:35.095 "base_bdevs_list": [ 00:17:35.095 { 00:17:35.095 "name": null, 00:17:35.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.095 "is_configured": false, 00:17:35.095 "data_offset": 0, 00:17:35.095 "data_size": 63488 00:17:35.095 }, 00:17:35.095 { 00:17:35.095 "name": "pt2", 00:17:35.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.095 "is_configured": true, 00:17:35.095 "data_offset": 2048, 00:17:35.095 "data_size": 63488 00:17:35.095 }, 00:17:35.095 { 00:17:35.095 "name": "pt3", 00:17:35.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.095 "is_configured": true, 00:17:35.095 "data_offset": 2048, 00:17:35.095 "data_size": 63488 00:17:35.095 }, 00:17:35.095 { 00:17:35.095 "name": "pt4", 00:17:35.095 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:35.095 "is_configured": true, 00:17:35.095 "data_offset": 2048, 00:17:35.095 "data_size": 63488 00:17:35.095 } 00:17:35.095 ] 00:17:35.095 }' 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.095 09:10:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.663 [2024-11-06 09:10:12.015941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.663 [2024-11-06 09:10:12.016123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.663 [2024-11-06 09:10:12.016251] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:17:35.663 [2024-11-06 09:10:12.016373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.663 [2024-11-06 09:10:12.016392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.663 
09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.663 [2024-11-06 09:10:12.111900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.663 [2024-11-06 09:10:12.111965] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.663 [2024-11-06 09:10:12.111995] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:35.663 [2024-11-06 09:10:12.112010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.663 [2024-11-06 09:10:12.115102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.663 [2024-11-06 09:10:12.115275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.663 [2024-11-06 09:10:12.115458] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.663 [2024-11-06 09:10:12.115519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.663 pt2 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.663 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.664 "name": "raid_bdev1", 00:17:35.664 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:35.664 "strip_size_kb": 0, 00:17:35.664 "state": "configuring", 00:17:35.664 "raid_level": "raid1", 00:17:35.664 "superblock": true, 00:17:35.664 "num_base_bdevs": 4, 00:17:35.664 "num_base_bdevs_discovered": 1, 00:17:35.664 "num_base_bdevs_operational": 3, 00:17:35.664 "base_bdevs_list": [ 00:17:35.664 { 00:17:35.664 "name": null, 00:17:35.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.664 "is_configured": false, 00:17:35.664 "data_offset": 2048, 00:17:35.664 "data_size": 63488 00:17:35.664 }, 00:17:35.664 { 00:17:35.664 "name": "pt2", 00:17:35.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.664 "is_configured": true, 00:17:35.664 "data_offset": 2048, 00:17:35.664 "data_size": 63488 00:17:35.664 }, 00:17:35.664 { 00:17:35.664 "name": null, 00:17:35.664 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:35.664 "is_configured": false, 00:17:35.664 "data_offset": 2048, 00:17:35.664 "data_size": 63488 00:17:35.664 }, 00:17:35.664 { 00:17:35.664 "name": null, 00:17:35.664 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:35.664 "is_configured": false, 00:17:35.664 "data_offset": 2048, 00:17:35.664 "data_size": 63488 00:17:35.664 } 00:17:35.664 ] 00:17:35.664 }' 
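The `verify_raid_bdev_state` checks traced above repeatedly run `rpc_cmd bdev_raid_get_bdevs all` and select one entry by name with `jq` (bdev_raid.sh@113). A minimal standalone sketch of that filtering step follows; the JSON here is a hand-written sample mirroring the `raid_bdev_info` payload in this log, not live RPC output:

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@113 selection step: pick one raid bdev by name
# from a bdev_raid_get_bdevs-style array, then read fields the test compares.
# The JSON below is illustrative sample data, not a real RPC response.
raid_bdevs=$(cat <<'EOF'
[{"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
  "num_base_bdevs_discovered": 3, "num_base_bdevs_operational": 3}]
EOF
)
# Select the entry whose name matches, as the test's jq filter does.
tmp=$(jq -r '.[] | select(.name == "raid_bdev1")' <<<"$raid_bdevs")
state=$(jq -r '.state' <<<"$tmp")
discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$tmp")
echo "$state $discovered"
```

Running this prints `online 3`, matching the state/count comparison the script performs against its expected values.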
00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.664 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.232 [2024-11-06 09:10:12.644135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.232 [2024-11-06 09:10:12.644212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.232 [2024-11-06 09:10:12.644246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:36.232 [2024-11-06 09:10:12.644262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.232 [2024-11-06 09:10:12.644841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.232 [2024-11-06 09:10:12.644897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:36.232 [2024-11-06 09:10:12.645005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:36.232 [2024-11-06 09:10:12.645038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:36.232 pt3 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.232 "name": "raid_bdev1", 00:17:36.232 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:36.232 "strip_size_kb": 0, 00:17:36.232 "state": "configuring", 00:17:36.232 "raid_level": "raid1", 00:17:36.232 "superblock": true, 00:17:36.232 "num_base_bdevs": 4, 00:17:36.232 "num_base_bdevs_discovered": 2, 00:17:36.232 "num_base_bdevs_operational": 3, 00:17:36.232 
"base_bdevs_list": [ 00:17:36.232 { 00:17:36.232 "name": null, 00:17:36.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.232 "is_configured": false, 00:17:36.232 "data_offset": 2048, 00:17:36.232 "data_size": 63488 00:17:36.232 }, 00:17:36.232 { 00:17:36.232 "name": "pt2", 00:17:36.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.232 "is_configured": true, 00:17:36.232 "data_offset": 2048, 00:17:36.232 "data_size": 63488 00:17:36.232 }, 00:17:36.232 { 00:17:36.232 "name": "pt3", 00:17:36.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.232 "is_configured": true, 00:17:36.232 "data_offset": 2048, 00:17:36.232 "data_size": 63488 00:17:36.232 }, 00:17:36.232 { 00:17:36.232 "name": null, 00:17:36.232 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:36.232 "is_configured": false, 00:17:36.232 "data_offset": 2048, 00:17:36.232 "data_size": 63488 00:17:36.232 } 00:17:36.232 ] 00:17:36.232 }' 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.232 09:10:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.800 [2024-11-06 09:10:13.164307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:36.800 [2024-11-06 09:10:13.164382] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.800 [2024-11-06 09:10:13.164417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:36.800 [2024-11-06 09:10:13.164432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.800 [2024-11-06 09:10:13.165066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.800 [2024-11-06 09:10:13.165093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:36.800 [2024-11-06 09:10:13.165195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:36.800 [2024-11-06 09:10:13.165235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:36.800 [2024-11-06 09:10:13.165414] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:36.800 [2024-11-06 09:10:13.165430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:36.800 [2024-11-06 09:10:13.165773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:36.800 [2024-11-06 09:10:13.166000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:36.800 [2024-11-06 09:10:13.166022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:36.800 [2024-11-06 09:10:13.166187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.800 pt4 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.800 "name": "raid_bdev1", 00:17:36.800 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:36.800 "strip_size_kb": 0, 00:17:36.800 "state": "online", 00:17:36.800 "raid_level": "raid1", 00:17:36.800 "superblock": true, 00:17:36.800 "num_base_bdevs": 4, 00:17:36.800 "num_base_bdevs_discovered": 3, 00:17:36.800 "num_base_bdevs_operational": 3, 00:17:36.800 "base_bdevs_list": [ 00:17:36.800 { 00:17:36.800 "name": null, 00:17:36.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.800 "is_configured": false, 00:17:36.800 
"data_offset": 2048, 00:17:36.800 "data_size": 63488 00:17:36.800 }, 00:17:36.800 { 00:17:36.800 "name": "pt2", 00:17:36.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.800 "is_configured": true, 00:17:36.800 "data_offset": 2048, 00:17:36.800 "data_size": 63488 00:17:36.800 }, 00:17:36.800 { 00:17:36.800 "name": "pt3", 00:17:36.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:36.800 "is_configured": true, 00:17:36.800 "data_offset": 2048, 00:17:36.800 "data_size": 63488 00:17:36.800 }, 00:17:36.800 { 00:17:36.800 "name": "pt4", 00:17:36.800 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:36.800 "is_configured": true, 00:17:36.800 "data_offset": 2048, 00:17:36.800 "data_size": 63488 00:17:36.800 } 00:17:36.800 ] 00:17:36.800 }' 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.800 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 [2024-11-06 09:10:13.700408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.368 [2024-11-06 09:10:13.700443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.368 [2024-11-06 09:10:13.700548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.368 [2024-11-06 09:10:13.700642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.368 [2024-11-06 09:10:13.700662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:37.368 09:10:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 [2024-11-06 09:10:13.772449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:37.368 [2024-11-06 09:10:13.772544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:37.368 [2024-11-06 09:10:13.772576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:37.368 [2024-11-06 09:10:13.772594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.368 [2024-11-06 09:10:13.775794] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.368 [2024-11-06 09:10:13.776003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:37.368 [2024-11-06 09:10:13.776139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:37.368 [2024-11-06 09:10:13.776208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.368 [2024-11-06 09:10:13.776388] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:37.368 [2024-11-06 09:10:13.776412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.368 [2024-11-06 09:10:13.776433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:37.368 [2024-11-06 09:10:13.776517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.368 [2024-11-06 09:10:13.776732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.368 pt1 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.368 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.368 "name": "raid_bdev1", 00:17:37.368 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:37.368 "strip_size_kb": 0, 00:17:37.368 "state": "configuring", 00:17:37.368 "raid_level": "raid1", 00:17:37.368 "superblock": true, 00:17:37.368 "num_base_bdevs": 4, 00:17:37.368 "num_base_bdevs_discovered": 2, 00:17:37.368 "num_base_bdevs_operational": 3, 00:17:37.368 "base_bdevs_list": [ 00:17:37.368 { 00:17:37.368 "name": null, 00:17:37.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.368 "is_configured": false, 00:17:37.368 "data_offset": 2048, 00:17:37.368 
"data_size": 63488 00:17:37.368 }, 00:17:37.368 { 00:17:37.368 "name": "pt2", 00:17:37.368 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.368 "is_configured": true, 00:17:37.368 "data_offset": 2048, 00:17:37.368 "data_size": 63488 00:17:37.368 }, 00:17:37.368 { 00:17:37.368 "name": "pt3", 00:17:37.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:37.368 "is_configured": true, 00:17:37.368 "data_offset": 2048, 00:17:37.368 "data_size": 63488 00:17:37.369 }, 00:17:37.369 { 00:17:37.369 "name": null, 00:17:37.369 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:37.369 "is_configured": false, 00:17:37.369 "data_offset": 2048, 00:17:37.369 "data_size": 63488 00:17:37.369 } 00:17:37.369 ] 00:17:37.369 }' 00:17:37.369 09:10:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.369 09:10:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.936 [2024-11-06 
09:10:14.348676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:37.936 [2024-11-06 09:10:14.348781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.936 [2024-11-06 09:10:14.348815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:37.936 [2024-11-06 09:10:14.348846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.936 [2024-11-06 09:10:14.349398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.936 [2024-11-06 09:10:14.349432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:37.936 [2024-11-06 09:10:14.349536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:37.936 [2024-11-06 09:10:14.349591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:37.936 [2024-11-06 09:10:14.349784] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:37.936 [2024-11-06 09:10:14.349800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:37.936 [2024-11-06 09:10:14.350152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:37.936 [2024-11-06 09:10:14.350341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:37.936 [2024-11-06 09:10:14.350362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:37.936 [2024-11-06 09:10:14.350549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.936 pt4 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:37.936 09:10:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.936 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.937 "name": "raid_bdev1", 00:17:37.937 "uuid": "3a3b24b6-e8a8-41fc-8a47-0f887ebb8533", 00:17:37.937 "strip_size_kb": 0, 00:17:37.937 "state": "online", 00:17:37.937 "raid_level": "raid1", 00:17:37.937 "superblock": true, 00:17:37.937 "num_base_bdevs": 4, 00:17:37.937 "num_base_bdevs_discovered": 3, 00:17:37.937 "num_base_bdevs_operational": 3, 00:17:37.937 "base_bdevs_list": [ 00:17:37.937 { 
00:17:37.937 "name": null, 00:17:37.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.937 "is_configured": false, 00:17:37.937 "data_offset": 2048, 00:17:37.937 "data_size": 63488 00:17:37.937 }, 00:17:37.937 { 00:17:37.937 "name": "pt2", 00:17:37.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.937 "is_configured": true, 00:17:37.937 "data_offset": 2048, 00:17:37.937 "data_size": 63488 00:17:37.937 }, 00:17:37.937 { 00:17:37.937 "name": "pt3", 00:17:37.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:37.937 "is_configured": true, 00:17:37.937 "data_offset": 2048, 00:17:37.937 "data_size": 63488 00:17:37.937 }, 00:17:37.937 { 00:17:37.937 "name": "pt4", 00:17:37.937 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:37.937 "is_configured": true, 00:17:37.937 "data_offset": 2048, 00:17:37.937 "data_size": 63488 00:17:37.937 } 00:17:37.937 ] 00:17:37.937 }' 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.937 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.504 
09:10:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.504 [2024-11-06 09:10:14.965273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.504 09:10:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.504 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3a3b24b6-e8a8-41fc-8a47-0f887ebb8533 '!=' 3a3b24b6-e8a8-41fc-8a47-0f887ebb8533 ']' 00:17:38.504 09:10:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74617 00:17:38.504 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74617 ']' 00:17:38.504 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74617 00:17:38.504 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:17:38.504 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:38.504 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74617 00:17:38.762 killing process with pid 74617 00:17:38.762 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:38.763 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:38.763 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74617' 00:17:38.763 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74617 00:17:38.763 [2024-11-06 09:10:15.044276] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.763 09:10:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74617 00:17:38.763 [2024-11-06 09:10:15.044396] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.763 [2024-11-06 09:10:15.044498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.763 [2024-11-06 09:10:15.044519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:39.020 [2024-11-06 09:10:15.422954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.397 ************************************ 00:17:40.397 END TEST raid_superblock_test 00:17:40.397 ************************************ 00:17:40.397 09:10:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:40.397 00:17:40.397 real 0m9.544s 00:17:40.397 user 0m15.674s 00:17:40.397 sys 0m1.378s 00:17:40.397 09:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:40.397 09:10:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.397 09:10:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:17:40.397 09:10:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:40.397 09:10:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:40.397 09:10:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.397 ************************************ 00:17:40.397 START TEST raid_read_error_test 00:17:40.397 ************************************ 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:17:40.397 09:10:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.HuujRjaQ9s 00:17:40.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75121 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75121 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75121 ']' 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:40.397 09:10:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.397 [2024-11-06 09:10:16.672286] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:17:40.397 [2024-11-06 09:10:16.672483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75121 ] 00:17:40.397 [2024-11-06 09:10:16.861961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.688 [2024-11-06 09:10:17.023435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.946 [2024-11-06 09:10:17.266668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.946 [2024-11-06 09:10:17.266730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.205 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:41.205 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:41.205 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:41.205 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:41.205 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.205 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 BaseBdev1_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 true 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 [2024-11-06 09:10:17.765711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:41.464 [2024-11-06 09:10:17.765784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.464 [2024-11-06 09:10:17.765834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:41.464 [2024-11-06 09:10:17.765858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.464 [2024-11-06 09:10:17.768642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.464 [2024-11-06 09:10:17.768697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:41.464 BaseBdev1 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 BaseBdev2_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 true 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 [2024-11-06 09:10:17.825810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:41.464 [2024-11-06 09:10:17.825906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.464 [2024-11-06 09:10:17.825935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:41.464 [2024-11-06 09:10:17.825953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.464 [2024-11-06 09:10:17.828729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.464 [2024-11-06 09:10:17.828782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:41.464 BaseBdev2 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 BaseBdev3_malloc 00:17:41.464 09:10:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 true 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 [2024-11-06 09:10:17.900499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:41.464 [2024-11-06 09:10:17.900706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.464 [2024-11-06 09:10:17.900748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:41.464 [2024-11-06 09:10:17.900770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.464 [2024-11-06 09:10:17.903674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.464 [2024-11-06 09:10:17.903855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:41.464 BaseBdev3 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 BaseBdev4_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.464 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.464 true 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.465 [2024-11-06 09:10:17.960848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:41.465 [2024-11-06 09:10:17.960920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.465 [2024-11-06 09:10:17.960957] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:41.465 [2024-11-06 09:10:17.960975] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.465 [2024-11-06 09:10:17.964248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.465 [2024-11-06 09:10:17.964317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:41.465 BaseBdev4 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.465 [2024-11-06 09:10:17.973033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.465 [2024-11-06 09:10:17.975520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.465 [2024-11-06 09:10:17.975777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.465 [2024-11-06 09:10:17.975933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:41.465 [2024-11-06 09:10:17.976248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:41.465 [2024-11-06 09:10:17.976272] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:41.465 [2024-11-06 09:10:17.976621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:41.465 [2024-11-06 09:10:17.976864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:41.465 [2024-11-06 09:10:17.976882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:41.465 [2024-11-06 09:10:17.977136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:41.465 09:10:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.465 09:10:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.723 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.723 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.723 "name": "raid_bdev1", 00:17:41.723 "uuid": "50fe306d-e0ff-4995-8350-430b5fb74c91", 00:17:41.723 "strip_size_kb": 0, 00:17:41.723 "state": "online", 00:17:41.723 "raid_level": "raid1", 00:17:41.723 "superblock": true, 00:17:41.723 "num_base_bdevs": 4, 00:17:41.723 "num_base_bdevs_discovered": 4, 00:17:41.723 "num_base_bdevs_operational": 4, 00:17:41.723 "base_bdevs_list": [ 00:17:41.723 { 
00:17:41.723 "name": "BaseBdev1", 00:17:41.723 "uuid": "b6168afb-91fd-567e-bc33-4d98565355f0", 00:17:41.723 "is_configured": true, 00:17:41.723 "data_offset": 2048, 00:17:41.723 "data_size": 63488 00:17:41.723 }, 00:17:41.723 { 00:17:41.723 "name": "BaseBdev2", 00:17:41.723 "uuid": "1f36087c-a104-55cc-b9d5-c057b902d84d", 00:17:41.723 "is_configured": true, 00:17:41.723 "data_offset": 2048, 00:17:41.723 "data_size": 63488 00:17:41.723 }, 00:17:41.723 { 00:17:41.723 "name": "BaseBdev3", 00:17:41.723 "uuid": "ffab22dd-b1a7-56d7-8027-d750452acf51", 00:17:41.723 "is_configured": true, 00:17:41.723 "data_offset": 2048, 00:17:41.723 "data_size": 63488 00:17:41.723 }, 00:17:41.723 { 00:17:41.723 "name": "BaseBdev4", 00:17:41.723 "uuid": "562d6e7a-bc2f-5ff6-8934-9877e0163533", 00:17:41.723 "is_configured": true, 00:17:41.723 "data_offset": 2048, 00:17:41.723 "data_size": 63488 00:17:41.723 } 00:17:41.723 ] 00:17:41.723 }' 00:17:41.723 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.723 09:10:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.309 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:42.309 09:10:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:42.309 [2024-11-06 09:10:18.626656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:43.246 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.247 09:10:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.247 09:10:19 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.247 "name": "raid_bdev1", 00:17:43.247 "uuid": "50fe306d-e0ff-4995-8350-430b5fb74c91", 00:17:43.247 "strip_size_kb": 0, 00:17:43.247 "state": "online", 00:17:43.247 "raid_level": "raid1", 00:17:43.247 "superblock": true, 00:17:43.247 "num_base_bdevs": 4, 00:17:43.247 "num_base_bdevs_discovered": 4, 00:17:43.247 "num_base_bdevs_operational": 4, 00:17:43.247 "base_bdevs_list": [ 00:17:43.247 { 00:17:43.247 "name": "BaseBdev1", 00:17:43.247 "uuid": "b6168afb-91fd-567e-bc33-4d98565355f0", 00:17:43.247 "is_configured": true, 00:17:43.247 "data_offset": 2048, 00:17:43.247 "data_size": 63488 00:17:43.247 }, 00:17:43.247 { 00:17:43.247 "name": "BaseBdev2", 00:17:43.247 "uuid": "1f36087c-a104-55cc-b9d5-c057b902d84d", 00:17:43.247 "is_configured": true, 00:17:43.247 "data_offset": 2048, 00:17:43.247 "data_size": 63488 00:17:43.247 }, 00:17:43.247 { 00:17:43.247 "name": "BaseBdev3", 00:17:43.247 "uuid": "ffab22dd-b1a7-56d7-8027-d750452acf51", 00:17:43.247 "is_configured": true, 00:17:43.247 "data_offset": 2048, 00:17:43.247 "data_size": 63488 00:17:43.247 }, 00:17:43.247 { 00:17:43.247 "name": "BaseBdev4", 00:17:43.247 "uuid": "562d6e7a-bc2f-5ff6-8934-9877e0163533", 00:17:43.247 "is_configured": true, 00:17:43.247 "data_offset": 2048, 00:17:43.247 "data_size": 63488 00:17:43.247 } 00:17:43.247 ] 00:17:43.247 }' 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.247 09:10:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.506 09:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:43.506 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.506 09:10:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.764 [2024-11-06 09:10:20.039906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.764 [2024-11-06 09:10:20.039946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.764 [2024-11-06 09:10:20.043526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.764 [2024-11-06 09:10:20.043735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.764 [2024-11-06 09:10:20.044123] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.764 [2024-11-06 09:10:20.044301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:43.764 { 00:17:43.764 "results": [ 00:17:43.764 { 00:17:43.764 "job": "raid_bdev1", 00:17:43.764 "core_mask": "0x1", 00:17:43.764 "workload": "randrw", 00:17:43.764 "percentage": 50, 00:17:43.764 "status": "finished", 00:17:43.764 "queue_depth": 1, 00:17:43.764 "io_size": 131072, 00:17:43.764 "runtime": 1.410917, 00:17:43.764 "iops": 7232.884712566366, 00:17:43.764 "mibps": 904.1105890707958, 00:17:43.764 "io_failed": 0, 00:17:43.765 "io_timeout": 0, 00:17:43.765 "avg_latency_us": 133.96374753908512, 00:17:43.765 "min_latency_us": 43.985454545454544, 00:17:43.765 "max_latency_us": 1817.1345454545456 00:17:43.765 } 00:17:43.765 ], 00:17:43.765 "core_count": 1 00:17:43.765 } 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75121 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75121 ']' 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75121 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@957 -- # uname 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75121 00:17:43.765 killing process with pid 75121 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75121' 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75121 00:17:43.765 09:10:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75121 00:17:43.765 [2024-11-06 09:10:20.086729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.032 [2024-11-06 09:10:20.380736] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.HuujRjaQ9s 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:44.968 ************************************ 00:17:44.968 END TEST raid_read_error_test 00:17:44.968 ************************************ 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:44.968 00:17:44.968 real 0m4.916s 00:17:44.968 user 0m6.070s 00:17:44.968 sys 0m0.621s 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:44.968 09:10:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.226 09:10:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:17:45.226 09:10:21 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:45.226 09:10:21 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:45.226 09:10:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.226 ************************************ 00:17:45.226 START TEST raid_write_error_test 00:17:45.226 ************************************ 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7E9N7iAtmb 00:17:45.226 09:10:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75267 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75267 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75267 ']' 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:45.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:45.226 09:10:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.226 [2024-11-06 09:10:21.632341] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:17:45.226 [2024-11-06 09:10:21.632507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75267 ] 00:17:45.485 [2024-11-06 09:10:21.804843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.485 [2024-11-06 09:10:21.940156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.743 [2024-11-06 09:10:22.143915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.743 [2024-11-06 09:10:22.144000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 BaseBdev1_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 true 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 [2024-11-06 09:10:22.665236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:46.309 [2024-11-06 09:10:22.665313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.309 [2024-11-06 09:10:22.665347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:46.309 [2024-11-06 09:10:22.665367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.309 [2024-11-06 09:10:22.668246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.309 [2024-11-06 09:10:22.668299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:46.309 BaseBdev1 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 BaseBdev2_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:17:46.309 09:10:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 true 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 [2024-11-06 09:10:22.722347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:46.309 [2024-11-06 09:10:22.722598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.309 [2024-11-06 09:10:22.722653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:46.309 [2024-11-06 09:10:22.722678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.309 [2024-11-06 09:10:22.725473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.309 [2024-11-06 09:10:22.725525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:46.309 BaseBdev2 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:17:46.309 BaseBdev3_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 true 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 [2024-11-06 09:10:22.788612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:46.309 [2024-11-06 09:10:22.788849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.309 [2024-11-06 09:10:22.788904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:46.309 [2024-11-06 09:10:22.788925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.309 [2024-11-06 09:10:22.791772] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.309 [2024-11-06 09:10:22.791951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:46.309 BaseBdev3 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 BaseBdev4_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.309 true 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.309 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.568 [2024-11-06 09:10:22.845300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:17:46.568 [2024-11-06 09:10:22.845372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.568 [2024-11-06 09:10:22.845404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:46.568 [2024-11-06 09:10:22.845423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.568 [2024-11-06 09:10:22.848293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.568 [2024-11-06 09:10:22.848349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:46.568 BaseBdev4 
00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.568 [2024-11-06 09:10:22.853352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.568 [2024-11-06 09:10:22.855873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.568 [2024-11-06 09:10:22.855994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:46.568 [2024-11-06 09:10:22.856100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.568 [2024-11-06 09:10:22.856409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:46.568 [2024-11-06 09:10:22.856434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:46.568 [2024-11-06 09:10:22.856755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:17:46.568 [2024-11-06 09:10:22.857002] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:46.568 [2024-11-06 09:10:22.857019] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:46.568 [2024-11-06 09:10:22.857270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.568 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.568 "name": "raid_bdev1", 00:17:46.568 "uuid": "d6835ef3-22f4-46e4-aaf5-63fadc314b53", 00:17:46.568 "strip_size_kb": 0, 00:17:46.568 "state": "online", 00:17:46.568 "raid_level": "raid1", 00:17:46.568 "superblock": true, 00:17:46.568 "num_base_bdevs": 4, 00:17:46.568 "num_base_bdevs_discovered": 4, 00:17:46.568 
"num_base_bdevs_operational": 4, 00:17:46.568 "base_bdevs_list": [ 00:17:46.568 { 00:17:46.568 "name": "BaseBdev1", 00:17:46.568 "uuid": "d3830c98-6f35-56c9-90fc-635e0e77dd09", 00:17:46.568 "is_configured": true, 00:17:46.568 "data_offset": 2048, 00:17:46.568 "data_size": 63488 00:17:46.568 }, 00:17:46.568 { 00:17:46.568 "name": "BaseBdev2", 00:17:46.568 "uuid": "f87f0eed-109c-5a51-a757-487ddddb9609", 00:17:46.568 "is_configured": true, 00:17:46.569 "data_offset": 2048, 00:17:46.569 "data_size": 63488 00:17:46.569 }, 00:17:46.569 { 00:17:46.569 "name": "BaseBdev3", 00:17:46.569 "uuid": "29857eda-7d94-5f70-b269-c1614867958b", 00:17:46.569 "is_configured": true, 00:17:46.569 "data_offset": 2048, 00:17:46.569 "data_size": 63488 00:17:46.569 }, 00:17:46.569 { 00:17:46.569 "name": "BaseBdev4", 00:17:46.569 "uuid": "464ee7bc-10e9-54fd-9be2-23c29a66623c", 00:17:46.569 "is_configured": true, 00:17:46.569 "data_offset": 2048, 00:17:46.569 "data_size": 63488 00:17:46.569 } 00:17:46.569 ] 00:17:46.569 }' 00:17:46.569 09:10:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.569 09:10:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.859 09:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:17:46.859 09:10:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:47.148 [2024-11-06 09:10:23.490918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.081 [2024-11-06 09:10:24.365200] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:48.081 [2024-11-06 09:10:24.365270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.081 [2024-11-06 09:10:24.365547] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.081 "name": "raid_bdev1", 00:17:48.081 "uuid": "d6835ef3-22f4-46e4-aaf5-63fadc314b53", 00:17:48.081 "strip_size_kb": 0, 00:17:48.081 "state": "online", 00:17:48.081 "raid_level": "raid1", 00:17:48.081 "superblock": true, 00:17:48.081 "num_base_bdevs": 4, 00:17:48.081 "num_base_bdevs_discovered": 3, 00:17:48.081 "num_base_bdevs_operational": 3, 00:17:48.081 "base_bdevs_list": [ 00:17:48.081 { 00:17:48.081 "name": null, 00:17:48.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.081 "is_configured": false, 00:17:48.081 "data_offset": 0, 00:17:48.081 "data_size": 63488 00:17:48.081 }, 00:17:48.081 { 00:17:48.081 "name": "BaseBdev2", 00:17:48.081 "uuid": "f87f0eed-109c-5a51-a757-487ddddb9609", 00:17:48.081 "is_configured": true, 00:17:48.081 "data_offset": 2048, 00:17:48.081 "data_size": 63488 00:17:48.081 }, 00:17:48.081 { 00:17:48.081 "name": "BaseBdev3", 00:17:48.081 "uuid": "29857eda-7d94-5f70-b269-c1614867958b", 00:17:48.081 "is_configured": true, 00:17:48.081 "data_offset": 2048, 00:17:48.081 "data_size": 63488 00:17:48.081 }, 00:17:48.081 { 00:17:48.081 "name": "BaseBdev4", 00:17:48.081 "uuid": "464ee7bc-10e9-54fd-9be2-23c29a66623c", 00:17:48.081 "is_configured": true, 00:17:48.081 "data_offset": 2048, 00:17:48.081 "data_size": 63488 00:17:48.081 } 00:17:48.081 ] 
00:17:48.081 }' 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.081 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.339 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:48.340 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.340 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.340 [2024-11-06 09:10:24.873942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.340 [2024-11-06 09:10:24.874115] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.598 [2024-11-06 09:10:24.877530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.598 [2024-11-06 09:10:24.877718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.598 [2024-11-06 09:10:24.877980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.598 { 00:17:48.598 "results": [ 00:17:48.598 { 00:17:48.598 "job": "raid_bdev1", 00:17:48.598 "core_mask": "0x1", 00:17:48.598 "workload": "randrw", 00:17:48.598 "percentage": 50, 00:17:48.598 "status": "finished", 00:17:48.598 "queue_depth": 1, 00:17:48.598 "io_size": 131072, 00:17:48.598 "runtime": 1.380636, 00:17:48.598 "iops": 7939.819039920732, 00:17:48.598 "mibps": 992.4773799900915, 00:17:48.598 "io_failed": 0, 00:17:48.598 "io_timeout": 0, 00:17:48.598 "avg_latency_us": 121.53334726576107, 00:17:48.598 "min_latency_us": 43.985454545454544, 00:17:48.598 "max_latency_us": 1921.3963636363637 00:17:48.598 } 00:17:48.598 ], 00:17:48.598 "core_count": 1 00:17:48.598 } 00:17:48.598 [2024-11-06 09:10:24.878116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state offline 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75267 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75267 ']' 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75267 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75267 00:17:48.598 killing process with pid 75267 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75267' 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75267 00:17:48.598 09:10:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75267 00:17:48.598 [2024-11-06 09:10:24.913313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:48.857 [2024-11-06 09:10:25.212568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7E9N7iAtmb 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:49.795 00:17:49.795 real 0m4.798s 00:17:49.795 user 0m5.862s 00:17:49.795 sys 0m0.610s 00:17:49.795 ************************************ 00:17:49.795 END TEST raid_write_error_test 00:17:49.795 ************************************ 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.795 09:10:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.053 09:10:26 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:17:50.053 09:10:26 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:50.053 09:10:26 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:17:50.053 09:10:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:50.053 09:10:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:50.053 09:10:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:50.053 ************************************ 00:17:50.053 START TEST raid_rebuild_test 00:17:50.053 ************************************ 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:50.053 
09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:17:50.053 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75405 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:50.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75405 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 75405 ']' 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.054 09:10:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:50.054 [2024-11-06 09:10:26.462057] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:17:50.054 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:50.054 Zero copy mechanism will not be used. 
00:17:50.054 [2024-11-06 09:10:26.462363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75405 ] 00:17:50.313 [2024-11-06 09:10:26.641747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.313 [2024-11-06 09:10:26.796379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.573 [2024-11-06 09:10:27.014318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:50.573 [2024-11-06 09:10:27.014401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 BaseBdev1_malloc 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 [2024-11-06 09:10:27.487969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:51.140 
[2024-11-06 09:10:27.488454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.140 [2024-11-06 09:10:27.488501] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:51.140 [2024-11-06 09:10:27.488524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.140 [2024-11-06 09:10:27.491379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.140 [2024-11-06 09:10:27.491435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:51.140 BaseBdev1 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 BaseBdev2_malloc 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 [2024-11-06 09:10:27.540323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:51.140 [2024-11-06 09:10:27.540547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.140 [2024-11-06 09:10:27.540590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:17:51.140 [2024-11-06 09:10:27.540613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.140 [2024-11-06 09:10:27.543414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.140 [2024-11-06 09:10:27.543467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:51.140 BaseBdev2 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 spare_malloc 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 spare_delay 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 [2024-11-06 09:10:27.614581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.140 [2024-11-06 09:10:27.614663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:51.140 [2024-11-06 09:10:27.614697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:51.140 [2024-11-06 09:10:27.614716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.140 [2024-11-06 09:10:27.617639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.140 [2024-11-06 09:10:27.617697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.140 spare 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.140 [2024-11-06 09:10:27.622700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:51.140 [2024-11-06 09:10:27.625315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.140 [2024-11-06 09:10:27.625571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:51.140 [2024-11-06 09:10:27.625604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:51.140 [2024-11-06 09:10:27.625968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:51.140 [2024-11-06 09:10:27.626183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:51.140 [2024-11-06 09:10:27.626203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:51.140 [2024-11-06 09:10:27.626402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.140 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.141 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.141 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.399 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.399 "name": "raid_bdev1", 00:17:51.399 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:17:51.399 "strip_size_kb": 0, 00:17:51.399 "state": "online", 00:17:51.399 
"raid_level": "raid1", 00:17:51.399 "superblock": false, 00:17:51.399 "num_base_bdevs": 2, 00:17:51.399 "num_base_bdevs_discovered": 2, 00:17:51.399 "num_base_bdevs_operational": 2, 00:17:51.399 "base_bdevs_list": [ 00:17:51.399 { 00:17:51.399 "name": "BaseBdev1", 00:17:51.399 "uuid": "65e8592f-df9c-5c68-9601-064bc16a7283", 00:17:51.399 "is_configured": true, 00:17:51.399 "data_offset": 0, 00:17:51.399 "data_size": 65536 00:17:51.399 }, 00:17:51.399 { 00:17:51.399 "name": "BaseBdev2", 00:17:51.399 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:17:51.399 "is_configured": true, 00:17:51.399 "data_offset": 0, 00:17:51.399 "data_size": 65536 00:17:51.399 } 00:17:51.399 ] 00:17:51.399 }' 00:17:51.399 09:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.399 09:10:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.658 [2024-11-06 09:10:28.123203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.658 09:10:28 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.658 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:51.917 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:52.175 [2024-11-06 09:10:28.466996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:52.175 /dev/nbd0 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:52.176 1+0 records in 00:17:52.176 1+0 records out 00:17:52.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000652184 s, 6.3 MB/s 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:52.176 09:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:58.744 65536+0 records in 00:17:58.744 65536+0 records out 00:17:58.744 33554432 bytes (34 MB, 32 MiB) copied, 6.53458 s, 5.1 MB/s 00:17:58.744 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:58.744 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:58.744 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:58.744 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:58.744 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:58.744 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.744 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:59.003 [2024-11-06 09:10:35.384541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.003 [2024-11-06 09:10:35.417768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.003 "name": "raid_bdev1", 00:17:59.003 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:17:59.003 "strip_size_kb": 0, 00:17:59.003 "state": "online", 00:17:59.003 "raid_level": "raid1", 00:17:59.003 "superblock": false, 00:17:59.003 "num_base_bdevs": 2, 00:17:59.003 "num_base_bdevs_discovered": 1, 00:17:59.003 "num_base_bdevs_operational": 1, 00:17:59.003 "base_bdevs_list": [ 00:17:59.003 { 00:17:59.003 "name": null, 00:17:59.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.003 "is_configured": false, 00:17:59.003 "data_offset": 0, 00:17:59.003 "data_size": 65536 00:17:59.003 }, 00:17:59.003 { 00:17:59.003 "name": "BaseBdev2", 00:17:59.003 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:17:59.003 "is_configured": true, 00:17:59.003 "data_offset": 0, 00:17:59.003 "data_size": 65536 00:17:59.003 } 00:17:59.003 ] 00:17:59.003 }' 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.003 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.571 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.571 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.571 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.571 [2024-11-06 09:10:35.937918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.571 [2024-11-06 09:10:35.954380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:17:59.571 09:10:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.571 09:10:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:59.571 [2024-11-06 09:10:35.956952] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.508 09:10:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.508 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.508 "name": "raid_bdev1", 00:18:00.508 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:00.508 "strip_size_kb": 0, 00:18:00.508 "state": "online", 00:18:00.509 "raid_level": "raid1", 00:18:00.509 "superblock": false, 00:18:00.509 "num_base_bdevs": 2, 00:18:00.509 "num_base_bdevs_discovered": 2, 00:18:00.509 "num_base_bdevs_operational": 2, 00:18:00.509 "process": { 00:18:00.509 "type": "rebuild", 00:18:00.509 "target": "spare", 00:18:00.509 "progress": { 00:18:00.509 
"blocks": 20480, 00:18:00.509 "percent": 31 00:18:00.509 } 00:18:00.509 }, 00:18:00.509 "base_bdevs_list": [ 00:18:00.509 { 00:18:00.509 "name": "spare", 00:18:00.509 "uuid": "1160ed51-ca42-58a8-b7fe-5f621e6952df", 00:18:00.509 "is_configured": true, 00:18:00.509 "data_offset": 0, 00:18:00.509 "data_size": 65536 00:18:00.509 }, 00:18:00.509 { 00:18:00.509 "name": "BaseBdev2", 00:18:00.509 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:00.509 "is_configured": true, 00:18:00.509 "data_offset": 0, 00:18:00.509 "data_size": 65536 00:18:00.509 } 00:18:00.509 ] 00:18:00.509 }' 00:18:00.509 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.827 [2024-11-06 09:10:37.119168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.827 [2024-11-06 09:10:37.167220] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:00.827 [2024-11-06 09:10:37.167341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.827 [2024-11-06 09:10:37.167368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.827 [2024-11-06 09:10:37.167386] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:00.827 09:10:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.827 "name": "raid_bdev1", 00:18:00.827 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:00.827 "strip_size_kb": 0, 00:18:00.827 "state": "online", 00:18:00.827 "raid_level": "raid1", 00:18:00.827 
"superblock": false, 00:18:00.827 "num_base_bdevs": 2, 00:18:00.827 "num_base_bdevs_discovered": 1, 00:18:00.827 "num_base_bdevs_operational": 1, 00:18:00.827 "base_bdevs_list": [ 00:18:00.827 { 00:18:00.827 "name": null, 00:18:00.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.827 "is_configured": false, 00:18:00.827 "data_offset": 0, 00:18:00.827 "data_size": 65536 00:18:00.827 }, 00:18:00.827 { 00:18:00.827 "name": "BaseBdev2", 00:18:00.827 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:00.827 "is_configured": true, 00:18:00.827 "data_offset": 0, 00:18:00.827 "data_size": 65536 00:18:00.827 } 00:18:00.827 ] 00:18:00.827 }' 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.827 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:01.395 "name": "raid_bdev1", 00:18:01.395 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:01.395 "strip_size_kb": 0, 00:18:01.395 "state": "online", 00:18:01.395 "raid_level": "raid1", 00:18:01.395 "superblock": false, 00:18:01.395 "num_base_bdevs": 2, 00:18:01.395 "num_base_bdevs_discovered": 1, 00:18:01.395 "num_base_bdevs_operational": 1, 00:18:01.395 "base_bdevs_list": [ 00:18:01.395 { 00:18:01.395 "name": null, 00:18:01.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.395 "is_configured": false, 00:18:01.395 "data_offset": 0, 00:18:01.395 "data_size": 65536 00:18:01.395 }, 00:18:01.395 { 00:18:01.395 "name": "BaseBdev2", 00:18:01.395 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:01.395 "is_configured": true, 00:18:01.395 "data_offset": 0, 00:18:01.395 "data_size": 65536 00:18:01.395 } 00:18:01.395 ] 00:18:01.395 }' 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.395 [2024-11-06 09:10:37.860014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.395 [2024-11-06 09:10:37.875967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:18:01.395 09:10:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.395 
09:10:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:01.395 [2024-11-06 09:10:37.878724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.771 "name": "raid_bdev1", 00:18:02.771 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:02.771 "strip_size_kb": 0, 00:18:02.771 "state": "online", 00:18:02.771 "raid_level": "raid1", 00:18:02.771 "superblock": false, 00:18:02.771 "num_base_bdevs": 2, 00:18:02.771 "num_base_bdevs_discovered": 2, 00:18:02.771 "num_base_bdevs_operational": 2, 00:18:02.771 "process": { 00:18:02.771 "type": "rebuild", 00:18:02.771 "target": "spare", 00:18:02.771 "progress": { 00:18:02.771 "blocks": 20480, 00:18:02.771 "percent": 31 00:18:02.771 } 00:18:02.771 }, 00:18:02.771 "base_bdevs_list": [ 
00:18:02.771 { 00:18:02.771 "name": "spare", 00:18:02.771 "uuid": "1160ed51-ca42-58a8-b7fe-5f621e6952df", 00:18:02.771 "is_configured": true, 00:18:02.771 "data_offset": 0, 00:18:02.771 "data_size": 65536 00:18:02.771 }, 00:18:02.771 { 00:18:02.771 "name": "BaseBdev2", 00:18:02.771 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:02.771 "is_configured": true, 00:18:02.771 "data_offset": 0, 00:18:02.771 "data_size": 65536 00:18:02.771 } 00:18:02.771 ] 00:18:02.771 }' 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.771 09:10:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=399 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.771 
09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.771 "name": "raid_bdev1", 00:18:02.771 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:02.771 "strip_size_kb": 0, 00:18:02.771 "state": "online", 00:18:02.771 "raid_level": "raid1", 00:18:02.771 "superblock": false, 00:18:02.771 "num_base_bdevs": 2, 00:18:02.771 "num_base_bdevs_discovered": 2, 00:18:02.771 "num_base_bdevs_operational": 2, 00:18:02.771 "process": { 00:18:02.771 "type": "rebuild", 00:18:02.771 "target": "spare", 00:18:02.771 "progress": { 00:18:02.771 "blocks": 22528, 00:18:02.771 "percent": 34 00:18:02.771 } 00:18:02.771 }, 00:18:02.771 "base_bdevs_list": [ 00:18:02.771 { 00:18:02.771 "name": "spare", 00:18:02.771 "uuid": "1160ed51-ca42-58a8-b7fe-5f621e6952df", 00:18:02.771 "is_configured": true, 00:18:02.771 "data_offset": 0, 00:18:02.771 "data_size": 65536 00:18:02.771 }, 00:18:02.771 { 00:18:02.771 "name": "BaseBdev2", 00:18:02.771 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:02.771 "is_configured": true, 00:18:02.771 "data_offset": 0, 00:18:02.771 "data_size": 65536 00:18:02.771 } 00:18:02.771 ] 00:18:02.771 }' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.771 09:10:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.704 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.704 "name": "raid_bdev1", 00:18:03.704 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:03.704 "strip_size_kb": 0, 00:18:03.704 "state": "online", 00:18:03.704 "raid_level": "raid1", 00:18:03.704 "superblock": false, 00:18:03.704 "num_base_bdevs": 2, 00:18:03.704 "num_base_bdevs_discovered": 2, 00:18:03.704 "num_base_bdevs_operational": 2, 00:18:03.704 "process": { 
00:18:03.704 "type": "rebuild", 00:18:03.704 "target": "spare", 00:18:03.704 "progress": { 00:18:03.704 "blocks": 45056, 00:18:03.704 "percent": 68 00:18:03.704 } 00:18:03.704 }, 00:18:03.704 "base_bdevs_list": [ 00:18:03.704 { 00:18:03.704 "name": "spare", 00:18:03.704 "uuid": "1160ed51-ca42-58a8-b7fe-5f621e6952df", 00:18:03.704 "is_configured": true, 00:18:03.704 "data_offset": 0, 00:18:03.704 "data_size": 65536 00:18:03.704 }, 00:18:03.704 { 00:18:03.704 "name": "BaseBdev2", 00:18:03.704 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:03.704 "is_configured": true, 00:18:03.704 "data_offset": 0, 00:18:03.704 "data_size": 65536 00:18:03.704 } 00:18:03.704 ] 00:18:03.704 }' 00:18:03.962 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.962 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:03.962 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.962 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.962 09:10:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:04.897 [2024-11-06 09:10:41.102625] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:04.897 [2024-11-06 09:10:41.102729] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:04.897 [2024-11-06 09:10:41.102798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.897 "name": "raid_bdev1", 00:18:04.897 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:04.897 "strip_size_kb": 0, 00:18:04.897 "state": "online", 00:18:04.897 "raid_level": "raid1", 00:18:04.897 "superblock": false, 00:18:04.897 "num_base_bdevs": 2, 00:18:04.897 "num_base_bdevs_discovered": 2, 00:18:04.897 "num_base_bdevs_operational": 2, 00:18:04.897 "base_bdevs_list": [ 00:18:04.897 { 00:18:04.897 "name": "spare", 00:18:04.897 "uuid": "1160ed51-ca42-58a8-b7fe-5f621e6952df", 00:18:04.897 "is_configured": true, 00:18:04.897 "data_offset": 0, 00:18:04.897 "data_size": 65536 00:18:04.897 }, 00:18:04.897 { 00:18:04.897 "name": "BaseBdev2", 00:18:04.897 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:04.897 "is_configured": true, 00:18:04.897 "data_offset": 0, 00:18:04.897 "data_size": 65536 00:18:04.897 } 00:18:04.897 ] 00:18:04.897 }' 00:18:04.897 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:05.155 09:10:41 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.155 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.156 "name": "raid_bdev1", 00:18:05.156 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:05.156 "strip_size_kb": 0, 00:18:05.156 "state": "online", 00:18:05.156 "raid_level": "raid1", 00:18:05.156 "superblock": false, 00:18:05.156 "num_base_bdevs": 2, 00:18:05.156 "num_base_bdevs_discovered": 2, 00:18:05.156 "num_base_bdevs_operational": 2, 00:18:05.156 "base_bdevs_list": [ 00:18:05.156 { 00:18:05.156 "name": "spare", 00:18:05.156 "uuid": "1160ed51-ca42-58a8-b7fe-5f621e6952df", 00:18:05.156 "is_configured": true, 
00:18:05.156 "data_offset": 0, 00:18:05.156 "data_size": 65536 00:18:05.156 }, 00:18:05.156 { 00:18:05.156 "name": "BaseBdev2", 00:18:05.156 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:05.156 "is_configured": true, 00:18:05.156 "data_offset": 0, 00:18:05.156 "data_size": 65536 00:18:05.156 } 00:18:05.156 ] 00:18:05.156 }' 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.156 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.414 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.414 "name": "raid_bdev1", 00:18:05.414 "uuid": "d40c0f4f-8824-4811-8cad-786fe67cc14d", 00:18:05.414 "strip_size_kb": 0, 00:18:05.414 "state": "online", 00:18:05.414 "raid_level": "raid1", 00:18:05.414 "superblock": false, 00:18:05.414 "num_base_bdevs": 2, 00:18:05.414 "num_base_bdevs_discovered": 2, 00:18:05.414 "num_base_bdevs_operational": 2, 00:18:05.414 "base_bdevs_list": [ 00:18:05.414 { 00:18:05.414 "name": "spare", 00:18:05.414 "uuid": "1160ed51-ca42-58a8-b7fe-5f621e6952df", 00:18:05.414 "is_configured": true, 00:18:05.414 "data_offset": 0, 00:18:05.414 "data_size": 65536 00:18:05.414 }, 00:18:05.414 { 00:18:05.414 "name": "BaseBdev2", 00:18:05.414 "uuid": "c3a5c435-f42f-580e-9f37-e4e6e0d4ba06", 00:18:05.414 "is_configured": true, 00:18:05.414 "data_offset": 0, 00:18:05.414 "data_size": 65536 00:18:05.414 } 00:18:05.414 ] 00:18:05.414 }' 00:18:05.414 09:10:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.414 09:10:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.672 [2024-11-06 09:10:42.174985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:05.672 [2024-11-06 09:10:42.175186] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:05.672 [2024-11-06 09:10:42.175311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.672 [2024-11-06 09:10:42.175414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.672 [2024-11-06 09:10:42.175432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.672 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:05.931 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:06.190 /dev/nbd0 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.190 1+0 records in 00:18:06.190 1+0 records out 00:18:06.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511555 s, 8.0 MB/s 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:06.190 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:06.448 /dev/nbd1 00:18:06.448 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:06.449 1+0 records in 00:18:06.449 1+0 records out 00:18:06.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480081 s, 8.5 MB/s 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:06.449 09:10:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:06.707 09:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:06.707 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:06.707 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:06.707 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:06.707 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:06.707 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.707 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:06.966 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:07.225 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:07.225 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:07.225 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:07.225 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:07.225 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:07.225 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75405 00:18:07.484 09:10:43 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 75405 ']' 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 75405 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75405 00:18:07.484 killing process with pid 75405 00:18:07.484 Received shutdown signal, test time was about 60.000000 seconds 00:18:07.484 00:18:07.484 Latency(us) 00:18:07.484 [2024-11-06T09:10:44.019Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.484 [2024-11-06T09:10:44.019Z] =================================================================================================================== 00:18:07.484 [2024-11-06T09:10:44.019Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75405' 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 75405 00:18:07.484 [2024-11-06 09:10:43.792340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.484 09:10:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 75405 00:18:07.742 [2024-11-06 09:10:44.065619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:08.707 00:18:08.707 real 0m18.740s 00:18:08.707 user 0m21.065s 00:18:08.707 sys 0m3.578s 00:18:08.707 09:10:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.707 ************************************ 00:18:08.707 END TEST raid_rebuild_test 00:18:08.707 ************************************ 00:18:08.707 09:10:45 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:18:08.707 09:10:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:08.707 09:10:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:08.707 09:10:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.707 ************************************ 00:18:08.707 START TEST raid_rebuild_test_sb 00:18:08.707 ************************************ 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:08.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75863 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75863 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75863 ']' 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:08.707 09:10:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.966 [2024-11-06 09:10:45.284173] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:18:08.966 [2024-11-06 09:10:45.284622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75863 ] 00:18:08.966 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:08.966 Zero copy mechanism will not be used. 
00:18:08.966 [2024-11-06 09:10:45.470204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.224 [2024-11-06 09:10:45.610309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.483 [2024-11-06 09:10:45.815829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:09.483 [2024-11-06 09:10:45.815876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.051 BaseBdev1_malloc 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.051 [2024-11-06 09:10:46.365502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.051 [2024-11-06 09:10:46.365745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.051 [2024-11-06 09:10:46.365913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:10.051 [2024-11-06 
09:10:46.366043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.051 [2024-11-06 09:10:46.369045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.051 BaseBdev1 00:18:10.051 [2024-11-06 09:10:46.369226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.051 BaseBdev2_malloc 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.051 [2024-11-06 09:10:46.417835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:10.051 [2024-11-06 09:10:46.418344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.051 [2024-11-06 09:10:46.418386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:10.051 [2024-11-06 09:10:46.418408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.051 [2024-11-06 09:10:46.421244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:10.051 [2024-11-06 09:10:46.421294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:10.051 BaseBdev2 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.051 spare_malloc 00:18:10.051 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.052 spare_delay 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.052 [2024-11-06 09:10:46.493596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.052 [2024-11-06 09:10:46.493863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.052 [2024-11-06 09:10:46.493909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:10.052 [2024-11-06 09:10:46.493930] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.052 [2024-11-06 09:10:46.496868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.052 [2024-11-06 09:10:46.496921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.052 spare 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.052 [2024-11-06 09:10:46.501621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:10.052 [2024-11-06 09:10:46.504201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.052 [2024-11-06 09:10:46.504440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:10.052 [2024-11-06 09:10:46.504468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:10.052 [2024-11-06 09:10:46.504809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:10.052 [2024-11-06 09:10:46.505051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:10.052 [2024-11-06 09:10:46.505067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:10.052 [2024-11-06 09:10:46.505282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.052 "name": "raid_bdev1", 00:18:10.052 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:10.052 "strip_size_kb": 0, 00:18:10.052 "state": "online", 00:18:10.052 "raid_level": "raid1", 00:18:10.052 "superblock": true, 00:18:10.052 "num_base_bdevs": 2, 00:18:10.052 
"num_base_bdevs_discovered": 2, 00:18:10.052 "num_base_bdevs_operational": 2, 00:18:10.052 "base_bdevs_list": [ 00:18:10.052 { 00:18:10.052 "name": "BaseBdev1", 00:18:10.052 "uuid": "c7409341-10f4-5b00-aed6-b4a666df5da4", 00:18:10.052 "is_configured": true, 00:18:10.052 "data_offset": 2048, 00:18:10.052 "data_size": 63488 00:18:10.052 }, 00:18:10.052 { 00:18:10.052 "name": "BaseBdev2", 00:18:10.052 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:10.052 "is_configured": true, 00:18:10.052 "data_offset": 2048, 00:18:10.052 "data_size": 63488 00:18:10.052 } 00:18:10.052 ] 00:18:10.052 }' 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.052 09:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.620 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:10.620 09:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.620 [2024-11-06 09:10:47.010149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.620 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:10.947 [2024-11-06 09:10:47.353982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:10.947 /dev/nbd0 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:10.947 1+0 records in 00:18:10.947 1+0 records out 00:18:10.947 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388449 s, 10.5 MB/s 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.947 09:10:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:10.947 09:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:18:17.511 63488+0 records in 00:18:17.511 63488+0 records out 00:18:17.511 32505856 bytes (33 MB, 31 MiB) copied, 6.07302 s, 5.4 MB/s 00:18:17.511 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:17.511 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:17.511 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:17.511 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:17.511 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:17.512 [2024-11-06 09:10:53.798333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.512 [2024-11-06 09:10:53.810156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.512 09:10:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.512 "name": "raid_bdev1", 00:18:17.512 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:17.512 "strip_size_kb": 0, 00:18:17.512 "state": "online", 00:18:17.512 "raid_level": "raid1", 00:18:17.512 "superblock": true, 00:18:17.512 "num_base_bdevs": 2, 00:18:17.512 "num_base_bdevs_discovered": 1, 00:18:17.512 "num_base_bdevs_operational": 1, 00:18:17.512 "base_bdevs_list": [ 00:18:17.512 { 00:18:17.512 "name": null, 00:18:17.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.512 "is_configured": false, 00:18:17.512 "data_offset": 0, 00:18:17.512 "data_size": 63488 00:18:17.512 }, 00:18:17.512 { 00:18:17.512 "name": "BaseBdev2", 00:18:17.512 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:17.512 "is_configured": true, 00:18:17.512 "data_offset": 2048, 00:18:17.512 "data_size": 63488 00:18:17.512 } 00:18:17.512 ] 00:18:17.512 }' 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.512 09:10:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.079 09:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.079 09:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.079 09:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.079 [2024-11-06 09:10:54.334250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:18:18.079 [2024-11-06 09:10:54.350549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:18:18.079 09:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.079 09:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:18.079 [2024-11-06 09:10:54.353107] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.013 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.013 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.014 "name": "raid_bdev1", 00:18:19.014 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:19.014 "strip_size_kb": 0, 00:18:19.014 "state": "online", 00:18:19.014 "raid_level": "raid1", 00:18:19.014 "superblock": true, 00:18:19.014 "num_base_bdevs": 2, 00:18:19.014 
"num_base_bdevs_discovered": 2, 00:18:19.014 "num_base_bdevs_operational": 2, 00:18:19.014 "process": { 00:18:19.014 "type": "rebuild", 00:18:19.014 "target": "spare", 00:18:19.014 "progress": { 00:18:19.014 "blocks": 20480, 00:18:19.014 "percent": 32 00:18:19.014 } 00:18:19.014 }, 00:18:19.014 "base_bdevs_list": [ 00:18:19.014 { 00:18:19.014 "name": "spare", 00:18:19.014 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:19.014 "is_configured": true, 00:18:19.014 "data_offset": 2048, 00:18:19.014 "data_size": 63488 00:18:19.014 }, 00:18:19.014 { 00:18:19.014 "name": "BaseBdev2", 00:18:19.014 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:19.014 "is_configured": true, 00:18:19.014 "data_offset": 2048, 00:18:19.014 "data_size": 63488 00:18:19.014 } 00:18:19.014 ] 00:18:19.014 }' 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.014 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.014 [2024-11-06 09:10:55.506552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.270 [2024-11-06 09:10:55.561797] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.270 [2024-11-06 09:10:55.562140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.270 [2024-11-06 09:10:55.562284] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.270 [2024-11-06 09:10:55.562344] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.270 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.270 09:10:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.270 "name": "raid_bdev1", 00:18:19.270 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:19.270 "strip_size_kb": 0, 00:18:19.271 "state": "online", 00:18:19.271 "raid_level": "raid1", 00:18:19.271 "superblock": true, 00:18:19.271 "num_base_bdevs": 2, 00:18:19.271 "num_base_bdevs_discovered": 1, 00:18:19.271 "num_base_bdevs_operational": 1, 00:18:19.271 "base_bdevs_list": [ 00:18:19.271 { 00:18:19.271 "name": null, 00:18:19.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.271 "is_configured": false, 00:18:19.271 "data_offset": 0, 00:18:19.271 "data_size": 63488 00:18:19.271 }, 00:18:19.271 { 00:18:19.271 "name": "BaseBdev2", 00:18:19.271 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:19.271 "is_configured": true, 00:18:19.271 "data_offset": 2048, 00:18:19.271 "data_size": 63488 00:18:19.271 } 00:18:19.271 ] 00:18:19.271 }' 00:18:19.271 09:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.271 09:10:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.837 "name": "raid_bdev1", 00:18:19.837 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:19.837 "strip_size_kb": 0, 00:18:19.837 "state": "online", 00:18:19.837 "raid_level": "raid1", 00:18:19.837 "superblock": true, 00:18:19.837 "num_base_bdevs": 2, 00:18:19.837 "num_base_bdevs_discovered": 1, 00:18:19.837 "num_base_bdevs_operational": 1, 00:18:19.837 "base_bdevs_list": [ 00:18:19.837 { 00:18:19.837 "name": null, 00:18:19.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.837 "is_configured": false, 00:18:19.837 "data_offset": 0, 00:18:19.837 "data_size": 63488 00:18:19.837 }, 00:18:19.837 { 00:18:19.837 "name": "BaseBdev2", 00:18:19.837 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:19.837 "is_configured": true, 00:18:19.837 "data_offset": 2048, 00:18:19.837 "data_size": 63488 00:18:19.837 } 00:18:19.837 ] 00:18:19.837 }' 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:19.837 [2024-11-06 09:10:56.254471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.837 [2024-11-06 09:10:56.269866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.837 09:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:19.837 [2024-11-06 09:10:56.272440] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.770 09:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.028 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.028 "name": "raid_bdev1", 00:18:21.028 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:21.028 "strip_size_kb": 0, 00:18:21.028 "state": "online", 00:18:21.028 "raid_level": "raid1", 
00:18:21.028 "superblock": true, 00:18:21.028 "num_base_bdevs": 2, 00:18:21.028 "num_base_bdevs_discovered": 2, 00:18:21.028 "num_base_bdevs_operational": 2, 00:18:21.028 "process": { 00:18:21.028 "type": "rebuild", 00:18:21.028 "target": "spare", 00:18:21.028 "progress": { 00:18:21.028 "blocks": 20480, 00:18:21.028 "percent": 32 00:18:21.028 } 00:18:21.028 }, 00:18:21.028 "base_bdevs_list": [ 00:18:21.028 { 00:18:21.028 "name": "spare", 00:18:21.028 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:21.028 "is_configured": true, 00:18:21.028 "data_offset": 2048, 00:18:21.028 "data_size": 63488 00:18:21.028 }, 00:18:21.028 { 00:18:21.028 "name": "BaseBdev2", 00:18:21.028 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:21.028 "is_configured": true, 00:18:21.028 "data_offset": 2048, 00:18:21.028 "data_size": 63488 00:18:21.028 } 00:18:21.028 ] 00:18:21.028 }' 00:18:21.028 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.028 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.028 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.028 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.028 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:21.029 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:21.029 09:10:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=417 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.029 "name": "raid_bdev1", 00:18:21.029 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:21.029 "strip_size_kb": 0, 00:18:21.029 "state": "online", 00:18:21.029 "raid_level": "raid1", 00:18:21.029 "superblock": true, 00:18:21.029 "num_base_bdevs": 2, 00:18:21.029 "num_base_bdevs_discovered": 2, 00:18:21.029 "num_base_bdevs_operational": 2, 00:18:21.029 "process": { 00:18:21.029 "type": "rebuild", 00:18:21.029 "target": "spare", 00:18:21.029 "progress": { 00:18:21.029 "blocks": 22528, 00:18:21.029 "percent": 35 00:18:21.029 } 00:18:21.029 }, 00:18:21.029 "base_bdevs_list": [ 
00:18:21.029 { 00:18:21.029 "name": "spare", 00:18:21.029 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:21.029 "is_configured": true, 00:18:21.029 "data_offset": 2048, 00:18:21.029 "data_size": 63488 00:18:21.029 }, 00:18:21.029 { 00:18:21.029 "name": "BaseBdev2", 00:18:21.029 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:21.029 "is_configured": true, 00:18:21.029 "data_offset": 2048, 00:18:21.029 "data_size": 63488 00:18:21.029 } 00:18:21.029 ] 00:18:21.029 }' 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.029 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.287 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.287 09:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.222 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.223 "name": "raid_bdev1", 00:18:22.223 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:22.223 "strip_size_kb": 0, 00:18:22.223 "state": "online", 00:18:22.223 "raid_level": "raid1", 00:18:22.223 "superblock": true, 00:18:22.223 "num_base_bdevs": 2, 00:18:22.223 "num_base_bdevs_discovered": 2, 00:18:22.223 "num_base_bdevs_operational": 2, 00:18:22.223 "process": { 00:18:22.223 "type": "rebuild", 00:18:22.223 "target": "spare", 00:18:22.223 "progress": { 00:18:22.223 "blocks": 47104, 00:18:22.223 "percent": 74 00:18:22.223 } 00:18:22.223 }, 00:18:22.223 "base_bdevs_list": [ 00:18:22.223 { 00:18:22.223 "name": "spare", 00:18:22.223 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:22.223 "is_configured": true, 00:18:22.223 "data_offset": 2048, 00:18:22.223 "data_size": 63488 00:18:22.223 }, 00:18:22.223 { 00:18:22.223 "name": "BaseBdev2", 00:18:22.223 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:22.223 "is_configured": true, 00:18:22.223 "data_offset": 2048, 00:18:22.223 "data_size": 63488 00:18:22.223 } 00:18:22.223 ] 00:18:22.223 }' 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.223 09:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.156 [2024-11-06 
09:10:59.394367] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:23.157 [2024-11-06 09:10:59.394461] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:23.157 [2024-11-06 09:10:59.394621] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.415 "name": "raid_bdev1", 00:18:23.415 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:23.415 "strip_size_kb": 0, 00:18:23.415 "state": "online", 00:18:23.415 "raid_level": "raid1", 00:18:23.415 "superblock": true, 00:18:23.415 "num_base_bdevs": 2, 00:18:23.415 "num_base_bdevs_discovered": 2, 00:18:23.415 
"num_base_bdevs_operational": 2, 00:18:23.415 "base_bdevs_list": [ 00:18:23.415 { 00:18:23.415 "name": "spare", 00:18:23.415 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:23.415 "is_configured": true, 00:18:23.415 "data_offset": 2048, 00:18:23.415 "data_size": 63488 00:18:23.415 }, 00:18:23.415 { 00:18:23.415 "name": "BaseBdev2", 00:18:23.415 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:23.415 "is_configured": true, 00:18:23.415 "data_offset": 2048, 00:18:23.415 "data_size": 63488 00:18:23.415 } 00:18:23.415 ] 00:18:23.415 }' 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.415 09:10:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.415 "name": "raid_bdev1", 00:18:23.415 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:23.415 "strip_size_kb": 0, 00:18:23.415 "state": "online", 00:18:23.415 "raid_level": "raid1", 00:18:23.415 "superblock": true, 00:18:23.415 "num_base_bdevs": 2, 00:18:23.415 "num_base_bdevs_discovered": 2, 00:18:23.415 "num_base_bdevs_operational": 2, 00:18:23.415 "base_bdevs_list": [ 00:18:23.415 { 00:18:23.415 "name": "spare", 00:18:23.415 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:23.415 "is_configured": true, 00:18:23.415 "data_offset": 2048, 00:18:23.415 "data_size": 63488 00:18:23.415 }, 00:18:23.415 { 00:18:23.415 "name": "BaseBdev2", 00:18:23.415 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:23.415 "is_configured": true, 00:18:23.415 "data_offset": 2048, 00:18:23.415 "data_size": 63488 00:18:23.415 } 00:18:23.415 ] 00:18:23.415 }' 00:18:23.415 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.673 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.673 09:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.673 
09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.673 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.673 "name": "raid_bdev1", 00:18:23.673 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:23.673 "strip_size_kb": 0, 00:18:23.673 "state": "online", 00:18:23.673 "raid_level": "raid1", 00:18:23.673 "superblock": true, 00:18:23.673 "num_base_bdevs": 2, 00:18:23.673 "num_base_bdevs_discovered": 2, 00:18:23.673 "num_base_bdevs_operational": 2, 00:18:23.673 "base_bdevs_list": [ 00:18:23.674 { 00:18:23.674 "name": "spare", 00:18:23.674 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:23.674 "is_configured": true, 00:18:23.674 "data_offset": 2048, 00:18:23.674 "data_size": 63488 00:18:23.674 }, 
00:18:23.674 { 00:18:23.674 "name": "BaseBdev2", 00:18:23.674 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:23.674 "is_configured": true, 00:18:23.674 "data_offset": 2048, 00:18:23.674 "data_size": 63488 00:18:23.674 } 00:18:23.674 ] 00:18:23.674 }' 00:18:23.674 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.674 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.240 [2024-11-06 09:11:00.574423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.240 [2024-11-06 09:11:00.574466] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.240 [2024-11-06 09:11:00.574573] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.240 [2024-11-06 09:11:00.574672] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.240 [2024-11-06 09:11:00.574689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:24.240 09:11:00 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.240 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:24.498 /dev/nbd0 00:18:24.498 09:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.498 1+0 records in 00:18:24.498 1+0 records out 00:18:24.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320626 s, 12.8 MB/s 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.498 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:25.068 /dev/nbd1 00:18:25.068 09:11:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:18:25.068 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:25.069 1+0 records in 00:18:25.069 1+0 records out 00:18:25.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419131 s, 9.8 MB/s 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:18:25.069 09:11:01 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.069 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.636 09:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:25.895 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.895 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.895 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.896 [2024-11-06 09:11:02.249448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:18:25.896 [2024-11-06 09:11:02.249526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.896 [2024-11-06 09:11:02.249562] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:25.896 [2024-11-06 09:11:02.249578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.896 [2024-11-06 09:11:02.252446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.896 [2024-11-06 09:11:02.252492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.896 [2024-11-06 09:11:02.252610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:25.896 [2024-11-06 09:11:02.252670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.896 [2024-11-06 09:11:02.252874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.896 spare 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.896 [2024-11-06 09:11:02.353014] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:25.896 [2024-11-06 09:11:02.353306] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:25.896 [2024-11-06 09:11:02.353776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:18:25.896 [2024-11-06 09:11:02.354182] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:25.896 [2024-11-06 09:11:02.354209] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:25.896 [2024-11-06 09:11:02.354462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.896 
09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.896 "name": "raid_bdev1", 00:18:25.896 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:25.896 "strip_size_kb": 0, 00:18:25.896 "state": "online", 00:18:25.896 "raid_level": "raid1", 00:18:25.896 "superblock": true, 00:18:25.896 "num_base_bdevs": 2, 00:18:25.896 "num_base_bdevs_discovered": 2, 00:18:25.896 "num_base_bdevs_operational": 2, 00:18:25.896 "base_bdevs_list": [ 00:18:25.896 { 00:18:25.896 "name": "spare", 00:18:25.896 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:25.896 "is_configured": true, 00:18:25.896 "data_offset": 2048, 00:18:25.896 "data_size": 63488 00:18:25.896 }, 00:18:25.896 { 00:18:25.896 "name": "BaseBdev2", 00:18:25.896 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:25.896 "is_configured": true, 00:18:25.896 "data_offset": 2048, 00:18:25.896 "data_size": 63488 00:18:25.896 } 00:18:25.896 ] 00:18:25.896 }' 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.896 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.462 09:11:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:26.462 "name": "raid_bdev1", 00:18:26.462 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:26.462 "strip_size_kb": 0, 00:18:26.462 "state": "online", 00:18:26.462 "raid_level": "raid1", 00:18:26.462 "superblock": true, 00:18:26.462 "num_base_bdevs": 2, 00:18:26.462 "num_base_bdevs_discovered": 2, 00:18:26.462 "num_base_bdevs_operational": 2, 00:18:26.462 "base_bdevs_list": [ 00:18:26.462 { 00:18:26.462 "name": "spare", 00:18:26.462 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:26.462 "is_configured": true, 00:18:26.462 "data_offset": 2048, 00:18:26.462 "data_size": 63488 00:18:26.462 }, 00:18:26.462 { 00:18:26.462 "name": "BaseBdev2", 00:18:26.462 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:26.462 "is_configured": true, 00:18:26.462 "data_offset": 2048, 00:18:26.462 "data_size": 63488 00:18:26.462 } 00:18:26.462 ] 00:18:26.462 }' 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:26.462 09:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.720 [2024-11-06 09:11:03.086651] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.720 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.721 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.721 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.721 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.721 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.721 "name": "raid_bdev1", 00:18:26.721 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:26.721 "strip_size_kb": 0, 00:18:26.721 "state": "online", 00:18:26.721 "raid_level": "raid1", 00:18:26.721 "superblock": true, 00:18:26.721 "num_base_bdevs": 2, 00:18:26.721 "num_base_bdevs_discovered": 1, 00:18:26.721 "num_base_bdevs_operational": 1, 00:18:26.721 "base_bdevs_list": [ 00:18:26.721 { 00:18:26.721 "name": null, 00:18:26.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.721 "is_configured": false, 00:18:26.721 "data_offset": 0, 00:18:26.721 "data_size": 63488 00:18:26.721 }, 00:18:26.721 { 00:18:26.721 "name": "BaseBdev2", 00:18:26.721 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:26.721 "is_configured": true, 00:18:26.721 "data_offset": 2048, 00:18:26.721 "data_size": 63488 00:18:26.721 } 00:18:26.721 ] 00:18:26.721 }' 00:18:26.721 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.721 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.288 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.288 09:11:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.288 [2024-11-06 09:11:03.602839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.288 [2024-11-06 09:11:03.603213] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.288 [2024-11-06 09:11:03.603390] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:27.288 [2024-11-06 09:11:03.603663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.288 [2024-11-06 09:11:03.619025] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:18:27.288 09:11:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.288 09:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:27.288 [2024-11-06 09:11:03.621459] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.221 "name": "raid_bdev1", 00:18:28.221 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:28.221 "strip_size_kb": 0, 00:18:28.221 "state": "online", 00:18:28.221 "raid_level": "raid1", 00:18:28.221 "superblock": true, 00:18:28.221 "num_base_bdevs": 2, 00:18:28.221 "num_base_bdevs_discovered": 2, 00:18:28.221 "num_base_bdevs_operational": 2, 00:18:28.221 "process": { 00:18:28.221 "type": "rebuild", 00:18:28.221 "target": "spare", 00:18:28.221 "progress": { 00:18:28.221 "blocks": 20480, 00:18:28.221 "percent": 32 00:18:28.221 } 00:18:28.221 }, 00:18:28.221 "base_bdevs_list": [ 00:18:28.221 { 00:18:28.221 "name": "spare", 00:18:28.221 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:28.221 "is_configured": true, 00:18:28.221 "data_offset": 2048, 00:18:28.221 "data_size": 63488 00:18:28.221 }, 00:18:28.221 { 00:18:28.221 "name": "BaseBdev2", 00:18:28.221 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:28.221 "is_configured": true, 00:18:28.221 "data_offset": 2048, 00:18:28.221 "data_size": 63488 00:18:28.221 } 00:18:28.221 ] 00:18:28.221 }' 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:28.221 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:28.503 09:11:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.503 [2024-11-06 09:11:04.799071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.503 [2024-11-06 09:11:04.830247] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:28.503 [2024-11-06 09:11:04.830375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.503 [2024-11-06 09:11:04.830400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:28.503 [2024-11-06 09:11:04.830414] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.503 "name": "raid_bdev1", 00:18:28.503 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:28.503 "strip_size_kb": 0, 00:18:28.503 "state": "online", 00:18:28.503 "raid_level": "raid1", 00:18:28.503 "superblock": true, 00:18:28.503 "num_base_bdevs": 2, 00:18:28.503 "num_base_bdevs_discovered": 1, 00:18:28.503 "num_base_bdevs_operational": 1, 00:18:28.503 "base_bdevs_list": [ 00:18:28.503 { 00:18:28.503 "name": null, 00:18:28.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.503 "is_configured": false, 00:18:28.503 "data_offset": 0, 00:18:28.503 "data_size": 63488 00:18:28.503 }, 00:18:28.503 { 00:18:28.503 "name": "BaseBdev2", 00:18:28.503 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:28.503 "is_configured": true, 00:18:28.503 "data_offset": 2048, 00:18:28.503 "data_size": 63488 00:18:28.503 } 00:18:28.503 ] 00:18:28.503 }' 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.503 09:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.072 09:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:29.072 09:11:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:29.072 09:11:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.072 [2024-11-06 09:11:05.390294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:29.072 [2024-11-06 09:11:05.390552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.072 [2024-11-06 09:11:05.390592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:29.072 [2024-11-06 09:11:05.390612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.072 [2024-11-06 09:11:05.391249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.072 [2024-11-06 09:11:05.391287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:29.072 [2024-11-06 09:11:05.391408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:29.072 [2024-11-06 09:11:05.391433] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:29.072 [2024-11-06 09:11:05.391447] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:29.072 [2024-11-06 09:11:05.391488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.072 spare 00:18:29.072 [2024-11-06 09:11:05.407162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:29.072 09:11:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.072 09:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:29.072 [2024-11-06 09:11:05.409662] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.005 "name": "raid_bdev1", 00:18:30.005 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:30.005 "strip_size_kb": 0, 00:18:30.005 "state": "online", 00:18:30.005 
"raid_level": "raid1", 00:18:30.005 "superblock": true, 00:18:30.005 "num_base_bdevs": 2, 00:18:30.005 "num_base_bdevs_discovered": 2, 00:18:30.005 "num_base_bdevs_operational": 2, 00:18:30.005 "process": { 00:18:30.005 "type": "rebuild", 00:18:30.005 "target": "spare", 00:18:30.005 "progress": { 00:18:30.005 "blocks": 20480, 00:18:30.005 "percent": 32 00:18:30.005 } 00:18:30.005 }, 00:18:30.005 "base_bdevs_list": [ 00:18:30.005 { 00:18:30.005 "name": "spare", 00:18:30.005 "uuid": "346f8c89-d2f4-5b75-9900-661d8f8c4447", 00:18:30.005 "is_configured": true, 00:18:30.005 "data_offset": 2048, 00:18:30.005 "data_size": 63488 00:18:30.005 }, 00:18:30.005 { 00:18:30.005 "name": "BaseBdev2", 00:18:30.005 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:30.005 "is_configured": true, 00:18:30.005 "data_offset": 2048, 00:18:30.005 "data_size": 63488 00:18:30.005 } 00:18:30.005 ] 00:18:30.005 }' 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:30.005 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.263 [2024-11-06 09:11:06.575398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.263 [2024-11-06 09:11:06.618585] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:30.263 [2024-11-06 09:11:06.618694] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.263 [2024-11-06 09:11:06.618724] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.263 [2024-11-06 09:11:06.618736] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.263 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.264 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.264 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.264 09:11:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.264 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.264 "name": "raid_bdev1", 00:18:30.264 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:30.264 "strip_size_kb": 0, 00:18:30.264 "state": "online", 00:18:30.264 "raid_level": "raid1", 00:18:30.264 "superblock": true, 00:18:30.264 "num_base_bdevs": 2, 00:18:30.264 "num_base_bdevs_discovered": 1, 00:18:30.264 "num_base_bdevs_operational": 1, 00:18:30.264 "base_bdevs_list": [ 00:18:30.264 { 00:18:30.264 "name": null, 00:18:30.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.264 "is_configured": false, 00:18:30.264 "data_offset": 0, 00:18:30.264 "data_size": 63488 00:18:30.264 }, 00:18:30.264 { 00:18:30.264 "name": "BaseBdev2", 00:18:30.264 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:30.264 "is_configured": true, 00:18:30.264 "data_offset": 2048, 00:18:30.264 "data_size": 63488 00:18:30.264 } 00:18:30.264 ] 00:18:30.264 }' 00:18:30.264 09:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.264 09:11:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.863 "name": "raid_bdev1", 00:18:30.863 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:30.863 "strip_size_kb": 0, 00:18:30.863 "state": "online", 00:18:30.863 "raid_level": "raid1", 00:18:30.863 "superblock": true, 00:18:30.863 "num_base_bdevs": 2, 00:18:30.863 "num_base_bdevs_discovered": 1, 00:18:30.863 "num_base_bdevs_operational": 1, 00:18:30.863 "base_bdevs_list": [ 00:18:30.863 { 00:18:30.863 "name": null, 00:18:30.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.863 "is_configured": false, 00:18:30.863 "data_offset": 0, 00:18:30.863 "data_size": 63488 00:18:30.863 }, 00:18:30.863 { 00:18:30.863 "name": "BaseBdev2", 00:18:30.863 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:30.863 "is_configured": true, 00:18:30.863 "data_offset": 2048, 00:18:30.863 "data_size": 63488 00:18:30.863 } 00:18:30.863 ] 00:18:30.863 }' 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.863 [2024-11-06 09:11:07.326608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.863 [2024-11-06 09:11:07.326837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.863 [2024-11-06 09:11:07.326990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:30.863 [2024-11-06 09:11:07.327026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.863 [2024-11-06 09:11:07.327590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.863 [2024-11-06 09:11:07.327627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.863 [2024-11-06 09:11:07.327752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:30.863 [2024-11-06 09:11:07.327780] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.863 [2024-11-06 09:11:07.327795] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:30.863 [2024-11-06 09:11:07.327808] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:30.863 BaseBdev1 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.863 09:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:32.236 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.236 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.236 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.237 "name": "raid_bdev1", 00:18:32.237 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:32.237 
"strip_size_kb": 0, 00:18:32.237 "state": "online", 00:18:32.237 "raid_level": "raid1", 00:18:32.237 "superblock": true, 00:18:32.237 "num_base_bdevs": 2, 00:18:32.237 "num_base_bdevs_discovered": 1, 00:18:32.237 "num_base_bdevs_operational": 1, 00:18:32.237 "base_bdevs_list": [ 00:18:32.237 { 00:18:32.237 "name": null, 00:18:32.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.237 "is_configured": false, 00:18:32.237 "data_offset": 0, 00:18:32.237 "data_size": 63488 00:18:32.237 }, 00:18:32.237 { 00:18:32.237 "name": "BaseBdev2", 00:18:32.237 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:32.237 "is_configured": true, 00:18:32.237 "data_offset": 2048, 00:18:32.237 "data_size": 63488 00:18:32.237 } 00:18:32.237 ] 00:18:32.237 }' 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.237 09:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.495 09:11:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.495 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.495 "name": "raid_bdev1", 00:18:32.495 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:32.495 "strip_size_kb": 0, 00:18:32.495 "state": "online", 00:18:32.495 "raid_level": "raid1", 00:18:32.495 "superblock": true, 00:18:32.495 "num_base_bdevs": 2, 00:18:32.495 "num_base_bdevs_discovered": 1, 00:18:32.495 "num_base_bdevs_operational": 1, 00:18:32.495 "base_bdevs_list": [ 00:18:32.496 { 00:18:32.496 "name": null, 00:18:32.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.496 "is_configured": false, 00:18:32.496 "data_offset": 0, 00:18:32.496 "data_size": 63488 00:18:32.496 }, 00:18:32.496 { 00:18:32.496 "name": "BaseBdev2", 00:18:32.496 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:32.496 "is_configured": true, 00:18:32.496 "data_offset": 2048, 00:18:32.496 "data_size": 63488 00:18:32.496 } 00:18:32.496 ] 00:18:32.496 }' 00:18:32.496 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.496 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.496 09:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.496 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.753 [2024-11-06 09:11:09.035208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.753 [2024-11-06 09:11:09.035560] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.753 [2024-11-06 09:11:09.035592] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:32.753 request: 00:18:32.753 { 00:18:32.753 "base_bdev": "BaseBdev1", 00:18:32.753 "raid_bdev": "raid_bdev1", 00:18:32.753 "method": "bdev_raid_add_base_bdev", 00:18:32.753 "req_id": 1 00:18:32.753 } 00:18:32.753 Got JSON-RPC error response 00:18:32.753 response: 00:18:32.753 { 00:18:32.753 "code": -22, 00:18:32.753 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:32.753 } 00:18:32.753 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:32.753 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:32.753 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.753 09:11:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.753 09:11:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.753 09:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.696 "name": "raid_bdev1", 00:18:33.696 "uuid": 
"4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:33.696 "strip_size_kb": 0, 00:18:33.696 "state": "online", 00:18:33.696 "raid_level": "raid1", 00:18:33.696 "superblock": true, 00:18:33.696 "num_base_bdevs": 2, 00:18:33.696 "num_base_bdevs_discovered": 1, 00:18:33.696 "num_base_bdevs_operational": 1, 00:18:33.696 "base_bdevs_list": [ 00:18:33.696 { 00:18:33.696 "name": null, 00:18:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.696 "is_configured": false, 00:18:33.696 "data_offset": 0, 00:18:33.696 "data_size": 63488 00:18:33.696 }, 00:18:33.696 { 00:18:33.696 "name": "BaseBdev2", 00:18:33.696 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:33.696 "is_configured": true, 00:18:33.696 "data_offset": 2048, 00:18:33.696 "data_size": 63488 00:18:33.696 } 00:18:33.696 ] 00:18:33.696 }' 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.696 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.261 "name": "raid_bdev1", 00:18:34.261 "uuid": "4f9ff479-6a8b-4a38-8f58-370efb7f126b", 00:18:34.261 "strip_size_kb": 0, 00:18:34.261 "state": "online", 00:18:34.261 "raid_level": "raid1", 00:18:34.261 "superblock": true, 00:18:34.261 "num_base_bdevs": 2, 00:18:34.261 "num_base_bdevs_discovered": 1, 00:18:34.261 "num_base_bdevs_operational": 1, 00:18:34.261 "base_bdevs_list": [ 00:18:34.261 { 00:18:34.261 "name": null, 00:18:34.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.261 "is_configured": false, 00:18:34.261 "data_offset": 0, 00:18:34.261 "data_size": 63488 00:18:34.261 }, 00:18:34.261 { 00:18:34.261 "name": "BaseBdev2", 00:18:34.261 "uuid": "63b04390-eb65-5000-8e75-a6256b13ecf9", 00:18:34.261 "is_configured": true, 00:18:34.261 "data_offset": 2048, 00:18:34.261 "data_size": 63488 00:18:34.261 } 00:18:34.261 ] 00:18:34.261 }' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75863 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75863 ']' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 75863 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75863 00:18:34.261 killing process with pid 75863 00:18:34.261 Received shutdown signal, test time was about 60.000000 seconds 00:18:34.261 00:18:34.261 Latency(us) 00:18:34.261 [2024-11-06T09:11:10.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.261 [2024-11-06T09:11:10.796Z] =================================================================================================================== 00:18:34.261 [2024-11-06T09:11:10.796Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75863' 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 75863 00:18:34.261 [2024-11-06 09:11:10.777055] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.261 09:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 75863 00:18:34.261 [2024-11-06 09:11:10.777209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.261 [2024-11-06 09:11:10.777274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.261 [2024-11-06 09:11:10.777294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:34.519 [2024-11-06 09:11:11.042714] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:18:35.893 00:18:35.893 real 0m26.889s 00:18:35.893 user 0m33.730s 00:18:35.893 sys 0m3.787s 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:35.893 ************************************ 00:18:35.893 END TEST raid_rebuild_test_sb 00:18:35.893 ************************************ 00:18:35.893 09:11:12 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:18:35.893 09:11:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:35.893 09:11:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:35.893 09:11:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:35.893 ************************************ 00:18:35.893 START TEST raid_rebuild_test_io 00:18:35.893 ************************************ 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:35.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76629 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76629 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 76629 ']' 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.893 09:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:35.894 09:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.894 09:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:35.894 09:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.894 [2024-11-06 09:11:12.230373] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:18:35.894 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:35.894 Zero copy mechanism will not be used. 00:18:35.894 [2024-11-06 09:11:12.230844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76629 ] 00:18:35.894 [2024-11-06 09:11:12.408224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.151 [2024-11-06 09:11:12.539709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.409 [2024-11-06 09:11:12.742093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.409 [2024-11-06 09:11:12.742172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:36.667 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.667 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:18:36.667 09:11:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.667 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:36.667 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.667 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 BaseBdev1_malloc 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 [2024-11-06 09:11:13.214672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:36.925 [2024-11-06 09:11:13.214915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.925 [2024-11-06 09:11:13.214961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:36.925 [2024-11-06 09:11:13.214981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.925 [2024-11-06 09:11:13.217708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.925 [2024-11-06 09:11:13.217760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:36.925 BaseBdev1 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 BaseBdev2_malloc 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 [2024-11-06 09:11:13.266372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:36.925 [2024-11-06 09:11:13.266451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.925 [2024-11-06 09:11:13.266480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:36.925 [2024-11-06 09:11:13.266512] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.925 [2024-11-06 09:11:13.269251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.925 [2024-11-06 09:11:13.269300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:36.925 BaseBdev2 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 spare_malloc 00:18:36.925 09:11:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 spare_delay 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 [2024-11-06 09:11:13.339262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:36.925 [2024-11-06 09:11:13.339342] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.925 [2024-11-06 09:11:13.339375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:36.925 [2024-11-06 09:11:13.339393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.925 [2024-11-06 09:11:13.342179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.925 [2024-11-06 09:11:13.342231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:36.925 spare 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 [2024-11-06 09:11:13.347321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:36.925 [2024-11-06 09:11:13.349832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:36.925 [2024-11-06 09:11:13.350079] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:36.925 [2024-11-06 09:11:13.350111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:36.925 [2024-11-06 09:11:13.350447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:36.925 [2024-11-06 09:11:13.350670] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:36.925 [2024-11-06 09:11:13.350697] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:36.925 [2024-11-06 09:11:13.350933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:36.925 09:11:13 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.925 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.925 "name": "raid_bdev1", 00:18:36.925 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:36.925 "strip_size_kb": 0, 00:18:36.925 "state": "online", 00:18:36.925 "raid_level": "raid1", 00:18:36.926 "superblock": false, 00:18:36.926 "num_base_bdevs": 2, 00:18:36.926 "num_base_bdevs_discovered": 2, 00:18:36.926 "num_base_bdevs_operational": 2, 00:18:36.926 "base_bdevs_list": [ 00:18:36.926 { 00:18:36.926 "name": "BaseBdev1", 00:18:36.926 "uuid": "4a84cf1d-9845-59d8-aed9-c84dfcbf7c88", 00:18:36.926 "is_configured": true, 00:18:36.926 "data_offset": 0, 00:18:36.926 "data_size": 65536 00:18:36.926 }, 00:18:36.926 { 00:18:36.926 "name": "BaseBdev2", 00:18:36.926 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:36.926 "is_configured": true, 00:18:36.926 "data_offset": 0, 00:18:36.926 "data_size": 65536 00:18:36.926 } 00:18:36.926 ] 00:18:36.926 }' 00:18:36.926 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:36.926 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.499 [2024-11-06 09:11:13.879808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.499 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.500 [2024-11-06 09:11:13.983482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.500 09:11:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.500 09:11:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.759 09:11:14 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.760 "name": "raid_bdev1", 00:18:37.760 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:37.760 "strip_size_kb": 0, 00:18:37.760 "state": "online", 00:18:37.760 "raid_level": "raid1", 00:18:37.760 "superblock": false, 00:18:37.760 "num_base_bdevs": 2, 00:18:37.760 "num_base_bdevs_discovered": 1, 00:18:37.760 "num_base_bdevs_operational": 1, 00:18:37.760 "base_bdevs_list": [ 00:18:37.760 { 00:18:37.760 "name": null, 00:18:37.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.760 "is_configured": false, 00:18:37.760 "data_offset": 0, 00:18:37.760 "data_size": 65536 00:18:37.760 }, 00:18:37.760 { 00:18:37.760 "name": "BaseBdev2", 00:18:37.760 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:37.760 "is_configured": true, 00:18:37.760 "data_offset": 0, 00:18:37.760 "data_size": 65536 00:18:37.760 } 00:18:37.760 ] 00:18:37.760 }' 00:18:37.760 09:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.760 09:11:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.760 [2024-11-06 09:11:14.115519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:37.760 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:37.760 Zero copy mechanism will not be used. 00:18:37.760 Running I/O for 60 seconds... 
00:18:38.018 09:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:38.018 09:11:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.018 09:11:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.018 [2024-11-06 09:11:14.515605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:38.275 09:11:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.275 09:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:38.275 [2024-11-06 09:11:14.577118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:38.275 [2024-11-06 09:11:14.579595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:38.275 [2024-11-06 09:11:14.690102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:38.275 [2024-11-06 09:11:14.690758] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:38.533 [2024-11-06 09:11:14.902739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:38.533 [2024-11-06 09:11:14.903167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:38.791 191.00 IOPS, 573.00 MiB/s [2024-11-06T09:11:15.326Z] [2024-11-06 09:11:15.273213] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:39.124 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.124 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.125 09:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.382 "name": "raid_bdev1", 00:18:39.382 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:39.382 "strip_size_kb": 0, 00:18:39.382 "state": "online", 00:18:39.382 "raid_level": "raid1", 00:18:39.382 "superblock": false, 00:18:39.382 "num_base_bdevs": 2, 00:18:39.382 "num_base_bdevs_discovered": 2, 00:18:39.382 "num_base_bdevs_operational": 2, 00:18:39.382 "process": { 00:18:39.382 "type": "rebuild", 00:18:39.382 "target": "spare", 00:18:39.382 "progress": { 00:18:39.382 "blocks": 10240, 00:18:39.382 "percent": 15 00:18:39.382 } 00:18:39.382 }, 00:18:39.382 "base_bdevs_list": [ 00:18:39.382 { 00:18:39.382 "name": "spare", 00:18:39.382 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:39.382 "is_configured": true, 00:18:39.382 "data_offset": 0, 00:18:39.382 "data_size": 65536 00:18:39.382 }, 00:18:39.382 { 00:18:39.382 "name": "BaseBdev2", 00:18:39.382 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:39.382 "is_configured": true, 00:18:39.382 "data_offset": 0, 00:18:39.382 
"data_size": 65536 00:18:39.382 } 00:18:39.382 ] 00:18:39.382 }' 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.382 [2024-11-06 09:11:15.692903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.382 09:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.382 [2024-11-06 09:11:15.736995] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.382 [2024-11-06 09:11:15.812074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:39.640 [2024-11-06 09:11:15.922074] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:39.640 [2024-11-06 09:11:15.924993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.640 [2024-11-06 09:11:15.925067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.640 [2024-11-06 09:11:15.925084] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:39.640 [2024-11-06 09:11:15.960854] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.640 09:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.640 09:11:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.640 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.640 "name": "raid_bdev1", 00:18:39.640 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:39.640 "strip_size_kb": 0, 00:18:39.640 "state": "online", 00:18:39.640 "raid_level": "raid1", 
00:18:39.640 "superblock": false, 00:18:39.640 "num_base_bdevs": 2, 00:18:39.640 "num_base_bdevs_discovered": 1, 00:18:39.640 "num_base_bdevs_operational": 1, 00:18:39.640 "base_bdevs_list": [ 00:18:39.640 { 00:18:39.640 "name": null, 00:18:39.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.640 "is_configured": false, 00:18:39.640 "data_offset": 0, 00:18:39.640 "data_size": 65536 00:18:39.640 }, 00:18:39.640 { 00:18:39.640 "name": "BaseBdev2", 00:18:39.640 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:39.640 "is_configured": true, 00:18:39.640 "data_offset": 0, 00:18:39.640 "data_size": 65536 00:18:39.640 } 00:18:39.640 ] 00:18:39.640 }' 00:18:39.640 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.640 09:11:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.207 147.00 IOPS, 441.00 MiB/s [2024-11-06T09:11:16.742Z] 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.207 "name": "raid_bdev1", 00:18:40.207 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:40.207 "strip_size_kb": 0, 00:18:40.207 "state": "online", 00:18:40.207 "raid_level": "raid1", 00:18:40.207 "superblock": false, 00:18:40.207 "num_base_bdevs": 2, 00:18:40.207 "num_base_bdevs_discovered": 1, 00:18:40.207 "num_base_bdevs_operational": 1, 00:18:40.207 "base_bdevs_list": [ 00:18:40.207 { 00:18:40.207 "name": null, 00:18:40.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.207 "is_configured": false, 00:18:40.207 "data_offset": 0, 00:18:40.207 "data_size": 65536 00:18:40.207 }, 00:18:40.207 { 00:18:40.207 "name": "BaseBdev2", 00:18:40.207 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:40.207 "is_configured": true, 00:18:40.207 "data_offset": 0, 00:18:40.207 "data_size": 65536 00:18:40.207 } 00:18:40.207 ] 00:18:40.207 }' 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.207 [2024-11-06 09:11:16.575177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:40.207 09:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:40.207 [2024-11-06 09:11:16.687341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:40.207 [2024-11-06 09:11:16.689921] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:40.465 [2024-11-06 09:11:16.801424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:40.465 [2024-11-06 09:11:16.802140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:40.465 [2024-11-06 09:11:16.939608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:40.465 [2024-11-06 09:11:16.940215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:40.723 159.00 IOPS, 477.00 MiB/s [2024-11-06T09:11:17.258Z] [2024-11-06 09:11:17.217731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:40.980 [2024-11-06 09:11:17.420769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:40.980 [2024-11-06 09:11:17.421239] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.238 "name": "raid_bdev1", 00:18:41.238 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:41.238 "strip_size_kb": 0, 00:18:41.238 "state": "online", 00:18:41.238 "raid_level": "raid1", 00:18:41.238 "superblock": false, 00:18:41.238 "num_base_bdevs": 2, 00:18:41.238 "num_base_bdevs_discovered": 2, 00:18:41.238 "num_base_bdevs_operational": 2, 00:18:41.238 "process": { 00:18:41.238 "type": "rebuild", 00:18:41.238 "target": "spare", 00:18:41.238 "progress": { 00:18:41.238 "blocks": 12288, 00:18:41.238 "percent": 18 00:18:41.238 } 00:18:41.238 }, 00:18:41.238 "base_bdevs_list": [ 00:18:41.238 { 00:18:41.238 "name": "spare", 00:18:41.238 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:41.238 "is_configured": true, 00:18:41.238 "data_offset": 0, 00:18:41.238 "data_size": 65536 00:18:41.238 }, 00:18:41.238 { 00:18:41.238 "name": "BaseBdev2", 00:18:41.238 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:41.238 "is_configured": true, 00:18:41.238 "data_offset": 0, 00:18:41.238 "data_size": 65536 00:18:41.238 } 00:18:41.238 ] 00:18:41.238 }' 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.238 09:11:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.238 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.495 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.495 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:41.495 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:41.495 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:41.495 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=437 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:41.496 09:11:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.496 "name": "raid_bdev1", 00:18:41.496 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:41.496 "strip_size_kb": 0, 00:18:41.496 "state": "online", 00:18:41.496 "raid_level": "raid1", 00:18:41.496 "superblock": false, 00:18:41.496 "num_base_bdevs": 2, 00:18:41.496 "num_base_bdevs_discovered": 2, 00:18:41.496 "num_base_bdevs_operational": 2, 00:18:41.496 "process": { 00:18:41.496 "type": "rebuild", 00:18:41.496 "target": "spare", 00:18:41.496 "progress": { 00:18:41.496 "blocks": 14336, 00:18:41.496 "percent": 21 00:18:41.496 } 00:18:41.496 }, 00:18:41.496 "base_bdevs_list": [ 00:18:41.496 { 00:18:41.496 "name": "spare", 00:18:41.496 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:41.496 "is_configured": true, 00:18:41.496 "data_offset": 0, 00:18:41.496 "data_size": 65536 00:18:41.496 }, 00:18:41.496 { 00:18:41.496 "name": "BaseBdev2", 00:18:41.496 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:41.496 "is_configured": true, 00:18:41.496 "data_offset": 0, 00:18:41.496 "data_size": 65536 00:18:41.496 } 00:18:41.496 ] 00:18:41.496 }' 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.496 09:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.802 [2024-11-06 09:11:18.076638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:41.802 131.75 IOPS, 395.25 
MiB/s [2024-11-06T09:11:18.337Z] [2024-11-06 09:11:18.306471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:42.371 [2024-11-06 09:11:18.877783] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.630 [2024-11-06 09:11:18.987297] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:42.630 09:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:42.630 "name": "raid_bdev1", 00:18:42.630 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:42.630 "strip_size_kb": 0, 00:18:42.630 "state": "online", 
00:18:42.630 "raid_level": "raid1", 00:18:42.630 "superblock": false, 00:18:42.630 "num_base_bdevs": 2, 00:18:42.630 "num_base_bdevs_discovered": 2, 00:18:42.630 "num_base_bdevs_operational": 2, 00:18:42.630 "process": { 00:18:42.630 "type": "rebuild", 00:18:42.630 "target": "spare", 00:18:42.630 "progress": { 00:18:42.630 "blocks": 32768, 00:18:42.630 "percent": 50 00:18:42.630 } 00:18:42.630 }, 00:18:42.630 "base_bdevs_list": [ 00:18:42.630 { 00:18:42.630 "name": "spare", 00:18:42.630 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:42.630 "is_configured": true, 00:18:42.630 "data_offset": 0, 00:18:42.630 "data_size": 65536 00:18:42.630 }, 00:18:42.630 { 00:18:42.630 "name": "BaseBdev2", 00:18:42.630 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:42.630 "is_configured": true, 00:18:42.630 "data_offset": 0, 00:18:42.630 "data_size": 65536 00:18:42.630 } 00:18:42.630 ] 00:18:42.630 }' 00:18:42.630 09:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:42.630 09:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:42.630 09:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:42.630 09:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:42.630 09:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.197 115.80 IOPS, 347.40 MiB/s [2024-11-06T09:11:19.732Z] [2024-11-06 09:11:19.649020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:18:43.456 [2024-11-06 09:11:19.758775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.714 103.33 IOPS, 310.00 MiB/s [2024-11-06T09:11:20.249Z] 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.714 "name": "raid_bdev1", 00:18:43.714 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:43.714 "strip_size_kb": 0, 00:18:43.714 "state": "online", 00:18:43.714 "raid_level": "raid1", 00:18:43.714 "superblock": false, 00:18:43.714 "num_base_bdevs": 2, 00:18:43.714 "num_base_bdevs_discovered": 2, 00:18:43.714 "num_base_bdevs_operational": 2, 00:18:43.714 "process": { 00:18:43.714 "type": "rebuild", 00:18:43.714 "target": "spare", 00:18:43.714 "progress": { 00:18:43.714 "blocks": 51200, 00:18:43.714 "percent": 78 00:18:43.714 } 00:18:43.714 }, 00:18:43.714 "base_bdevs_list": [ 00:18:43.714 { 00:18:43.714 "name": "spare", 00:18:43.714 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:43.714 "is_configured": true, 00:18:43.714 "data_offset": 0, 00:18:43.714 
"data_size": 65536 00:18:43.714 }, 00:18:43.714 { 00:18:43.714 "name": "BaseBdev2", 00:18:43.714 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:43.714 "is_configured": true, 00:18:43.714 "data_offset": 0, 00:18:43.714 "data_size": 65536 00:18:43.714 } 00:18:43.714 ] 00:18:43.714 }' 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.714 [2024-11-06 09:11:20.203175] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.714 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.973 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.973 09:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:43.973 [2024-11-06 09:11:20.417373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:18:44.231 [2024-11-06 09:11:20.637664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:44.488 [2024-11-06 09:11:20.989949] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:44.746 [2024-11-06 09:11:21.089913] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:44.746 [2024-11-06 09:11:21.092698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.004 93.14 IOPS, 279.43 MiB/s [2024-11-06T09:11:21.539Z] 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.004 "name": "raid_bdev1", 00:18:45.004 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:45.004 "strip_size_kb": 0, 00:18:45.004 "state": "online", 00:18:45.004 "raid_level": "raid1", 00:18:45.004 "superblock": false, 00:18:45.004 "num_base_bdevs": 2, 00:18:45.004 "num_base_bdevs_discovered": 2, 00:18:45.004 "num_base_bdevs_operational": 2, 00:18:45.004 "base_bdevs_list": [ 00:18:45.004 { 00:18:45.004 "name": "spare", 00:18:45.004 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:45.004 "is_configured": true, 00:18:45.004 "data_offset": 0, 00:18:45.004 "data_size": 65536 00:18:45.004 }, 00:18:45.004 { 00:18:45.004 "name": "BaseBdev2", 00:18:45.004 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:45.004 "is_configured": true, 00:18:45.004 "data_offset": 0, 00:18:45.004 "data_size": 65536 00:18:45.004 } 00:18:45.004 ] 00:18:45.004 }' 00:18:45.004 09:11:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.004 "name": "raid_bdev1", 00:18:45.004 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:45.004 "strip_size_kb": 0, 00:18:45.004 "state": "online", 00:18:45.004 "raid_level": "raid1", 00:18:45.004 "superblock": false, 00:18:45.004 "num_base_bdevs": 2, 
00:18:45.004 "num_base_bdevs_discovered": 2, 00:18:45.004 "num_base_bdevs_operational": 2, 00:18:45.004 "base_bdevs_list": [ 00:18:45.004 { 00:18:45.004 "name": "spare", 00:18:45.004 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:45.004 "is_configured": true, 00:18:45.004 "data_offset": 0, 00:18:45.004 "data_size": 65536 00:18:45.004 }, 00:18:45.004 { 00:18:45.004 "name": "BaseBdev2", 00:18:45.004 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:45.004 "is_configured": true, 00:18:45.004 "data_offset": 0, 00:18:45.004 "data_size": 65536 00:18:45.004 } 00:18:45.004 ] 00:18:45.004 }' 00:18:45.004 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.262 "name": "raid_bdev1", 00:18:45.262 "uuid": "f53a90a0-63a8-4cb3-9734-a6de10207db1", 00:18:45.262 "strip_size_kb": 0, 00:18:45.262 "state": "online", 00:18:45.262 "raid_level": "raid1", 00:18:45.262 "superblock": false, 00:18:45.262 "num_base_bdevs": 2, 00:18:45.262 "num_base_bdevs_discovered": 2, 00:18:45.262 "num_base_bdevs_operational": 2, 00:18:45.262 "base_bdevs_list": [ 00:18:45.262 { 00:18:45.262 "name": "spare", 00:18:45.262 "uuid": "42f5d7d1-1369-55d4-a5cc-b891869e129e", 00:18:45.262 "is_configured": true, 00:18:45.262 "data_offset": 0, 00:18:45.262 "data_size": 65536 00:18:45.262 }, 00:18:45.262 { 00:18:45.262 "name": "BaseBdev2", 00:18:45.262 "uuid": "91611d46-b594-52f7-8282-a09033868d38", 00:18:45.262 "is_configured": true, 00:18:45.262 "data_offset": 0, 00:18:45.262 "data_size": 65536 00:18:45.262 } 00:18:45.262 ] 00:18:45.262 }' 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.262 09:11:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:45.828 
09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.828 [2024-11-06 09:11:22.129198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:45.828 [2024-11-06 09:11:22.129238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:45.828 85.50 IOPS, 256.50 MiB/s 00:18:45.828 Latency(us) 00:18:45.828 [2024-11-06T09:11:22.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.828 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:45.828 raid_bdev1 : 8.08 84.75 254.24 0.00 0.00 15922.32 281.13 141081.13 00:18:45.828 [2024-11-06T09:11:22.363Z] =================================================================================================================== 00:18:45.828 [2024-11-06T09:11:22.363Z] Total : 84.75 254.24 0.00 0.00 15922.32 281.13 141081.13 00:18:45.828 [2024-11-06 09:11:22.221535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.828 [2024-11-06 09:11:22.221619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:45.828 [2024-11-06 09:11:22.221732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:45.828 [2024-11-06 09:11:22.221757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:45.828 { 00:18:45.828 "results": [ 00:18:45.828 { 00:18:45.828 "job": "raid_bdev1", 00:18:45.828 "core_mask": "0x1", 00:18:45.828 "workload": "randrw", 00:18:45.828 "percentage": 50, 00:18:45.828 "status": "finished", 00:18:45.828 "queue_depth": 2, 00:18:45.828 "io_size": 3145728, 00:18:45.828 "runtime": 8.082792, 00:18:45.828 "iops": 84.74794353238336, 00:18:45.828 "mibps": 254.24383059715007, 
00:18:45.828 "io_failed": 0, 00:18:45.828 "io_timeout": 0, 00:18:45.828 "avg_latency_us": 15922.32209157266, 00:18:45.828 "min_latency_us": 281.13454545454545, 00:18:45.828 "max_latency_us": 141081.13454545455 00:18:45.828 } 00:18:45.828 ], 00:18:45.828 "core_count": 1 00:18:45.828 } 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:45.828 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:45.829 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:45.829 09:11:22 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:45.829 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:45.829 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:46.087 /dev/nbd0 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.087 1+0 records in 00:18:46.087 1+0 records out 00:18:46.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373644 s, 11.0 MB/s 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:18:46.087 
09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.087 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:46.677 /dev/nbd1 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd1 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.677 1+0 records in 00:18:46.677 1+0 records out 00:18:46.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054796 s, 7.5 MB/s 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.677 09:11:22 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.677 09:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:46.677 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:46.677 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.677 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:46.677 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.677 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:46.677 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.677 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:46.936 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76629 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 76629 ']' 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 76629 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:18:47.195 09:11:23 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:47.195 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76629 00:18:47.452 killing process with pid 76629 00:18:47.452 Received shutdown signal, test time was about 9.627271 seconds 00:18:47.452 00:18:47.452 Latency(us) 00:18:47.452 [2024-11-06T09:11:23.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.452 [2024-11-06T09:11:23.987Z] =================================================================================================================== 00:18:47.452 [2024-11-06T09:11:23.987Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:47.453 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:47.453 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:47.453 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76629' 00:18:47.453 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 76629 00:18:47.453 09:11:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 76629 00:18:47.453 [2024-11-06 09:11:23.745637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.453 [2024-11-06 09:11:23.971097] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:48.828 00:18:48.828 real 0m12.937s 00:18:48.828 user 0m16.909s 00:18:48.828 sys 0m1.382s 00:18:48.828 ************************************ 00:18:48.828 END TEST raid_rebuild_test_io 00:18:48.828 ************************************ 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:18:48.828 09:11:25 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:18:48.828 09:11:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:48.828 09:11:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:48.828 09:11:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.828 ************************************ 00:18:48.828 START TEST raid_rebuild_test_sb_io 00:18:48.828 ************************************ 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:48.828 09:11:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77013 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77013 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 77013 ']' 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.828 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:48.828 09:11:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.828 [2024-11-06 09:11:25.241205] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:18:48.828 [2024-11-06 09:11:25.241655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77013 ] 00:18:48.828 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:48.828 Zero copy mechanism will not be used. 
00:18:49.087 [2024-11-06 09:11:25.417781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.087 [2024-11-06 09:11:25.553210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.346 [2024-11-06 09:11:25.761434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.346 [2024-11-06 09:11:25.761485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.913 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:49.913 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:18:49.913 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:49.913 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:49.913 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.913 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.913 BaseBdev1_malloc 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.914 [2024-11-06 09:11:26.301854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:49.914 [2024-11-06 09:11:26.302088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.914 [2024-11-06 09:11:26.302163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:18:49.914 [2024-11-06 09:11:26.302386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.914 [2024-11-06 09:11:26.305243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.914 [2024-11-06 09:11:26.305296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:49.914 BaseBdev1 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.914 BaseBdev2_malloc 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.914 [2024-11-06 09:11:26.358686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:49.914 [2024-11-06 09:11:26.358920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.914 [2024-11-06 09:11:26.358960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:49.914 [2024-11-06 09:11:26.358983] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.914 [2024-11-06 09:11:26.361802] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.914 [2024-11-06 09:11:26.361864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:49.914 BaseBdev2 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.914 spare_malloc 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.914 spare_delay 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.914 [2024-11-06 09:11:26.428346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:49.914 [2024-11-06 09:11:26.428562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:49.914 [2024-11-06 09:11:26.428607] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:49.914 [2024-11-06 09:11:26.428626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:49.914 [2024-11-06 09:11:26.431514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:49.914 [2024-11-06 09:11:26.431567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:49.914 spare 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.914 [2024-11-06 09:11:26.436501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.914 [2024-11-06 09:11:26.439097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:49.914 [2024-11-06 09:11:26.439328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:49.914 [2024-11-06 09:11:26.439354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:49.914 [2024-11-06 09:11:26.439690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:49.914 [2024-11-06 09:11:26.439931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:49.914 [2024-11-06 09:11:26.439947] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:49.914 [2024-11-06 09:11:26.440146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.914 09:11:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.914 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.171 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.171 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.171 "name": "raid_bdev1", 00:18:50.171 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:50.171 
"strip_size_kb": 0, 00:18:50.171 "state": "online", 00:18:50.171 "raid_level": "raid1", 00:18:50.171 "superblock": true, 00:18:50.171 "num_base_bdevs": 2, 00:18:50.171 "num_base_bdevs_discovered": 2, 00:18:50.171 "num_base_bdevs_operational": 2, 00:18:50.171 "base_bdevs_list": [ 00:18:50.171 { 00:18:50.171 "name": "BaseBdev1", 00:18:50.171 "uuid": "7220d0a6-5eb5-5bae-a1d0-2797aefa64ea", 00:18:50.171 "is_configured": true, 00:18:50.171 "data_offset": 2048, 00:18:50.171 "data_size": 63488 00:18:50.171 }, 00:18:50.171 { 00:18:50.171 "name": "BaseBdev2", 00:18:50.171 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:50.171 "is_configured": true, 00:18:50.171 "data_offset": 2048, 00:18:50.171 "data_size": 63488 00:18:50.171 } 00:18:50.171 ] 00:18:50.171 }' 00:18:50.171 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.171 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.429 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:50.429 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:50.429 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.429 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.429 [2024-11-06 09:11:26.932971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:50.429 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.686 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:50.686 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.686 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.686 09:11:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:50.686 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.686 09:11:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.686 [2024-11-06 09:11:27.040685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:50.686 09:11:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.686 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.686 "name": "raid_bdev1", 00:18:50.686 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:50.686 "strip_size_kb": 0, 00:18:50.686 "state": "online", 00:18:50.686 "raid_level": "raid1", 00:18:50.686 "superblock": true, 00:18:50.686 "num_base_bdevs": 2, 00:18:50.687 "num_base_bdevs_discovered": 1, 00:18:50.687 "num_base_bdevs_operational": 1, 00:18:50.687 "base_bdevs_list": [ 00:18:50.687 { 00:18:50.687 "name": null, 00:18:50.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.687 "is_configured": false, 00:18:50.687 "data_offset": 0, 00:18:50.687 "data_size": 63488 00:18:50.687 }, 00:18:50.687 { 00:18:50.687 "name": "BaseBdev2", 00:18:50.687 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:50.687 "is_configured": true, 00:18:50.687 "data_offset": 2048, 00:18:50.687 "data_size": 63488 00:18:50.687 } 00:18:50.687 ] 00:18:50.687 }' 00:18:50.687 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.687 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.687 [2024-11-06 09:11:27.155382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:50.687 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:50.687 Zero copy mechanism will not be used. 00:18:50.687 Running I/O for 60 seconds... 00:18:51.254 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:51.254 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.254 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.254 [2024-11-06 09:11:27.542715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:51.254 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.254 09:11:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:51.254 [2024-11-06 09:11:27.632024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:51.254 [2024-11-06 09:11:27.634913] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.254 [2024-11-06 09:11:27.754867] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:51.254 [2024-11-06 09:11:27.755751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:51.512 [2024-11-06 09:11:27.891631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:51.770 185.00 IOPS, 555.00 MiB/s [2024-11-06T09:11:28.305Z] [2024-11-06 09:11:28.241701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:51.770 [2024-11-06 09:11:28.242695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:52.029 [2024-11-06 09:11:28.464429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.287 "name": "raid_bdev1", 00:18:52.287 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:52.287 "strip_size_kb": 0, 00:18:52.287 "state": "online", 00:18:52.287 "raid_level": "raid1", 00:18:52.287 "superblock": true, 00:18:52.287 "num_base_bdevs": 2, 00:18:52.287 "num_base_bdevs_discovered": 2, 00:18:52.287 "num_base_bdevs_operational": 2, 
00:18:52.287 "process": { 00:18:52.287 "type": "rebuild", 00:18:52.287 "target": "spare", 00:18:52.287 "progress": { 00:18:52.287 "blocks": 12288, 00:18:52.287 "percent": 19 00:18:52.287 } 00:18:52.287 }, 00:18:52.287 "base_bdevs_list": [ 00:18:52.287 { 00:18:52.287 "name": "spare", 00:18:52.287 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:18:52.287 "is_configured": true, 00:18:52.287 "data_offset": 2048, 00:18:52.287 "data_size": 63488 00:18:52.287 }, 00:18:52.287 { 00:18:52.287 "name": "BaseBdev2", 00:18:52.287 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:52.287 "is_configured": true, 00:18:52.287 "data_offset": 2048, 00:18:52.287 "data_size": 63488 00:18:52.287 } 00:18:52.287 ] 00:18:52.287 }' 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.287 [2024-11-06 09:11:28.748633] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:52.287 [2024-11-06 09:11:28.749286] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.287 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.287 [2024-11-06 09:11:28.771600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.546 [2024-11-06 09:11:28.870912] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:52.546 [2024-11-06 09:11:28.873732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.546 [2024-11-06 09:11:28.873978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:52.546 [2024-11-06 09:11:28.874016] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:52.546 [2024-11-06 09:11:28.909764] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.546 09:11:28 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.546 "name": "raid_bdev1", 00:18:52.546 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:52.546 "strip_size_kb": 0, 00:18:52.546 "state": "online", 00:18:52.546 "raid_level": "raid1", 00:18:52.546 "superblock": true, 00:18:52.546 "num_base_bdevs": 2, 00:18:52.546 "num_base_bdevs_discovered": 1, 00:18:52.546 "num_base_bdevs_operational": 1, 00:18:52.546 "base_bdevs_list": [ 00:18:52.546 { 00:18:52.546 "name": null, 00:18:52.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.546 "is_configured": false, 00:18:52.546 "data_offset": 0, 00:18:52.546 "data_size": 63488 00:18:52.546 }, 00:18:52.546 { 00:18:52.546 "name": "BaseBdev2", 00:18:52.546 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:52.546 "is_configured": true, 00:18:52.546 "data_offset": 2048, 00:18:52.546 "data_size": 63488 00:18:52.546 } 00:18:52.546 ] 00:18:52.546 }' 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.546 09:11:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.063 160.00 IOPS, 480.00 MiB/s [2024-11-06T09:11:29.598Z] 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.063 "name": "raid_bdev1", 00:18:53.063 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:53.063 "strip_size_kb": 0, 00:18:53.063 "state": "online", 00:18:53.063 "raid_level": "raid1", 00:18:53.063 "superblock": true, 00:18:53.063 "num_base_bdevs": 2, 00:18:53.063 "num_base_bdevs_discovered": 1, 00:18:53.063 "num_base_bdevs_operational": 1, 00:18:53.063 "base_bdevs_list": [ 00:18:53.063 { 00:18:53.063 "name": null, 00:18:53.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.063 "is_configured": false, 00:18:53.063 "data_offset": 0, 00:18:53.063 "data_size": 63488 00:18:53.063 }, 00:18:53.063 { 00:18:53.063 "name": "BaseBdev2", 00:18:53.063 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:53.063 "is_configured": true, 00:18:53.063 "data_offset": 2048, 00:18:53.063 "data_size": 63488 00:18:53.063 } 00:18:53.063 ] 00:18:53.063 }' 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.063 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:53.063 [2024-11-06 09:11:29.565461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:53.348 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.348 09:11:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:53.348 [2024-11-06 09:11:29.637372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:53.348 [2024-11-06 09:11:29.640040] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:53.348 [2024-11-06 09:11:29.777221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:53.348 [2024-11-06 09:11:29.778047] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:53.606 [2024-11-06 09:11:29.998473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:53.606 [2024-11-06 09:11:29.998921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:53.864 164.33 IOPS, 493.00 MiB/s [2024-11-06T09:11:30.399Z] [2024-11-06 09:11:30.322600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:18:54.122 [2024-11-06 09:11:30.441235] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:54.122 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.380 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.380 "name": "raid_bdev1", 00:18:54.380 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:54.380 "strip_size_kb": 0, 00:18:54.380 "state": "online", 00:18:54.380 "raid_level": "raid1", 00:18:54.380 "superblock": true, 00:18:54.380 "num_base_bdevs": 2, 00:18:54.380 "num_base_bdevs_discovered": 2, 00:18:54.380 "num_base_bdevs_operational": 2, 00:18:54.380 "process": { 00:18:54.380 "type": "rebuild", 00:18:54.380 "target": "spare", 00:18:54.380 "progress": { 00:18:54.380 "blocks": 10240, 00:18:54.380 "percent": 16 00:18:54.380 } 00:18:54.380 }, 
00:18:54.380 "base_bdevs_list": [ 00:18:54.380 { 00:18:54.380 "name": "spare", 00:18:54.380 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:18:54.380 "is_configured": true, 00:18:54.380 "data_offset": 2048, 00:18:54.380 "data_size": 63488 00:18:54.380 }, 00:18:54.380 { 00:18:54.380 "name": "BaseBdev2", 00:18:54.380 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:54.380 "is_configured": true, 00:18:54.380 "data_offset": 2048, 00:18:54.380 "data_size": 63488 00:18:54.380 } 00:18:54.380 ] 00:18:54.380 }' 00:18:54.380 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.380 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.380 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.380 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.380 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:54.381 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.381 
09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.381 [2024-11-06 09:11:30.781006] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.381 "name": "raid_bdev1", 00:18:54.381 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:54.381 "strip_size_kb": 0, 00:18:54.381 "state": "online", 00:18:54.381 "raid_level": "raid1", 00:18:54.381 "superblock": true, 00:18:54.381 "num_base_bdevs": 2, 00:18:54.381 "num_base_bdevs_discovered": 2, 00:18:54.381 "num_base_bdevs_operational": 2, 00:18:54.381 "process": { 00:18:54.381 "type": "rebuild", 00:18:54.381 "target": "spare", 00:18:54.381 "progress": { 00:18:54.381 "blocks": 12288, 00:18:54.381 "percent": 19 00:18:54.381 } 00:18:54.381 }, 00:18:54.381 "base_bdevs_list": [ 00:18:54.381 { 00:18:54.381 "name": "spare", 00:18:54.381 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:18:54.381 "is_configured": true, 
00:18:54.381 "data_offset": 2048, 00:18:54.381 "data_size": 63488 00:18:54.381 }, 00:18:54.381 { 00:18:54.381 "name": "BaseBdev2", 00:18:54.381 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:54.381 "is_configured": true, 00:18:54.381 "data_offset": 2048, 00:18:54.381 "data_size": 63488 00:18:54.381 } 00:18:54.381 ] 00:18:54.381 }' 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:54.381 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:54.381 [2024-11-06 09:11:30.905658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:54.638 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:54.638 09:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.896 137.50 IOPS, 412.50 MiB/s [2024-11-06T09:11:31.431Z] [2024-11-06 09:11:31.248943] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:54.896 [2024-11-06 09:11:31.376168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:55.160 [2024-11-06 09:11:31.639956] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:55.422 [2024-11-06 09:11:31.850544] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:55.422 [2024-11-06 09:11:31.851003] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.422 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:55.423 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.423 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.681 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.681 "name": "raid_bdev1", 00:18:55.681 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:55.681 "strip_size_kb": 0, 00:18:55.681 "state": "online", 00:18:55.681 "raid_level": "raid1", 00:18:55.681 "superblock": true, 00:18:55.681 "num_base_bdevs": 2, 00:18:55.681 "num_base_bdevs_discovered": 2, 00:18:55.681 "num_base_bdevs_operational": 2, 00:18:55.681 "process": { 00:18:55.681 "type": "rebuild", 00:18:55.681 "target": "spare", 00:18:55.681 "progress": { 00:18:55.681 "blocks": 28672, 00:18:55.681 "percent": 45 00:18:55.681 } 00:18:55.681 }, 00:18:55.681 "base_bdevs_list": [ 00:18:55.681 { 00:18:55.681 "name": "spare", 00:18:55.681 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 
00:18:55.681 "is_configured": true, 00:18:55.681 "data_offset": 2048, 00:18:55.681 "data_size": 63488 00:18:55.681 }, 00:18:55.681 { 00:18:55.681 "name": "BaseBdev2", 00:18:55.681 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:55.681 "is_configured": true, 00:18:55.681 "data_offset": 2048, 00:18:55.681 "data_size": 63488 00:18:55.681 } 00:18:55.681 ] 00:18:55.681 }' 00:18:55.681 09:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.681 09:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.681 09:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.681 09:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.682 09:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:55.682 119.20 IOPS, 357.60 MiB/s [2024-11-06T09:11:32.217Z] [2024-11-06 09:11:32.207898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:18:55.939 [2024-11-06 09:11:32.417653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:55.939 [2024-11-06 09:11:32.418152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:18:56.505 [2024-11-06 09:11:32.733084] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:56.505 [2024-11-06 09:11:32.733979] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:56.762 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.763 "name": "raid_bdev1", 00:18:56.763 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:56.763 "strip_size_kb": 0, 00:18:56.763 "state": "online", 00:18:56.763 "raid_level": "raid1", 00:18:56.763 "superblock": true, 00:18:56.763 "num_base_bdevs": 2, 00:18:56.763 "num_base_bdevs_discovered": 2, 00:18:56.763 "num_base_bdevs_operational": 2, 00:18:56.763 "process": { 00:18:56.763 "type": "rebuild", 00:18:56.763 "target": "spare", 00:18:56.763 "progress": { 00:18:56.763 "blocks": 45056, 00:18:56.763 "percent": 70 00:18:56.763 } 00:18:56.763 }, 00:18:56.763 "base_bdevs_list": [ 00:18:56.763 { 00:18:56.763 "name": "spare", 00:18:56.763 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:18:56.763 "is_configured": true, 00:18:56.763 "data_offset": 2048, 00:18:56.763 "data_size": 63488 
00:18:56.763 }, 00:18:56.763 { 00:18:56.763 "name": "BaseBdev2", 00:18:56.763 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:56.763 "is_configured": true, 00:18:56.763 "data_offset": 2048, 00:18:56.763 "data_size": 63488 00:18:56.763 } 00:18:56.763 ] 00:18:56.763 }' 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.763 105.00 IOPS, 315.00 MiB/s [2024-11-06T09:11:33.298Z] 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.763 09:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:57.021 [2024-11-06 09:11:33.442953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:57.588 [2024-11-06 09:11:33.896102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:18:57.846 [2024-11-06 09:11:34.126189] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:57.846 94.43 IOPS, 283.29 MiB/s [2024-11-06T09:11:34.381Z] [2024-11-06 09:11:34.226246] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:57.846 [2024-11-06 09:11:34.237456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.846 "name": "raid_bdev1", 00:18:57.846 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:57.846 "strip_size_kb": 0, 00:18:57.846 "state": "online", 00:18:57.846 "raid_level": "raid1", 00:18:57.846 "superblock": true, 00:18:57.846 "num_base_bdevs": 2, 00:18:57.846 "num_base_bdevs_discovered": 2, 00:18:57.846 "num_base_bdevs_operational": 2, 00:18:57.846 "base_bdevs_list": [ 00:18:57.846 { 00:18:57.846 "name": "spare", 00:18:57.846 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:18:57.846 "is_configured": true, 00:18:57.846 "data_offset": 2048, 00:18:57.846 "data_size": 63488 00:18:57.846 }, 00:18:57.846 { 00:18:57.846 "name": "BaseBdev2", 00:18:57.846 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:57.846 "is_configured": true, 00:18:57.846 "data_offset": 2048, 00:18:57.846 "data_size": 63488 00:18:57.846 } 00:18:57.846 ] 00:18:57.846 }' 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:57.846 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.103 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.104 "name": "raid_bdev1", 00:18:58.104 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:58.104 "strip_size_kb": 0, 00:18:58.104 "state": "online", 00:18:58.104 "raid_level": "raid1", 00:18:58.104 "superblock": true, 00:18:58.104 "num_base_bdevs": 2, 00:18:58.104 "num_base_bdevs_discovered": 2, 
00:18:58.104 "num_base_bdevs_operational": 2, 00:18:58.104 "base_bdevs_list": [ 00:18:58.104 { 00:18:58.104 "name": "spare", 00:18:58.104 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:18:58.104 "is_configured": true, 00:18:58.104 "data_offset": 2048, 00:18:58.104 "data_size": 63488 00:18:58.104 }, 00:18:58.104 { 00:18:58.104 "name": "BaseBdev2", 00:18:58.104 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:58.104 "is_configured": true, 00:18:58.104 "data_offset": 2048, 00:18:58.104 "data_size": 63488 00:18:58.104 } 00:18:58.104 ] 00:18:58.104 }' 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.104 "name": "raid_bdev1", 00:18:58.104 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:18:58.104 "strip_size_kb": 0, 00:18:58.104 "state": "online", 00:18:58.104 "raid_level": "raid1", 00:18:58.104 "superblock": true, 00:18:58.104 "num_base_bdevs": 2, 00:18:58.104 "num_base_bdevs_discovered": 2, 00:18:58.104 "num_base_bdevs_operational": 2, 00:18:58.104 "base_bdevs_list": [ 00:18:58.104 { 00:18:58.104 "name": "spare", 00:18:58.104 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:18:58.104 "is_configured": true, 00:18:58.104 "data_offset": 2048, 00:18:58.104 "data_size": 63488 00:18:58.104 }, 00:18:58.104 { 00:18:58.104 "name": "BaseBdev2", 00:18:58.104 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:18:58.104 "is_configured": true, 00:18:58.104 "data_offset": 2048, 00:18:58.104 "data_size": 63488 00:18:58.104 } 00:18:58.104 ] 00:18:58.104 }' 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.104 09:11:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:58.670 [2024-11-06 09:11:35.020293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.670 [2024-11-06 09:11:35.020454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.670 00:18:58.670 Latency(us) 00:18:58.670 [2024-11-06T09:11:35.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.670 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:58.670 raid_bdev1 : 7.97 86.81 260.43 0.00 0.00 14354.03 286.72 119632.99 00:18:58.670 [2024-11-06T09:11:35.205Z] =================================================================================================================== 00:18:58.670 [2024-11-06T09:11:35.205Z] Total : 86.81 260.43 0.00 0.00 14354.03 286.72 119632.99 00:18:58.670 { 00:18:58.670 "results": [ 00:18:58.670 { 00:18:58.670 "job": "raid_bdev1", 00:18:58.670 "core_mask": "0x1", 00:18:58.670 "workload": "randrw", 00:18:58.670 "percentage": 50, 00:18:58.670 "status": "finished", 00:18:58.670 "queue_depth": 2, 00:18:58.670 "io_size": 3145728, 00:18:58.670 "runtime": 7.971574, 00:18:58.670 "iops": 86.80845213253994, 00:18:58.670 "mibps": 260.4253563976198, 00:18:58.670 "io_failed": 0, 00:18:58.670 "io_timeout": 0, 00:18:58.670 "avg_latency_us": 14354.027619548082, 00:18:58.670 "min_latency_us": 286.72, 00:18:58.670 "max_latency_us": 119632.98909090909 00:18:58.670 } 00:18:58.670 ], 00:18:58.670 "core_count": 1 00:18:58.670 } 00:18:58.670 [2024-11-06 09:11:35.149994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:58.670 [2024-11-06 09:11:35.150054] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.670 
[2024-11-06 09:11:35.150165] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.670 [2024-11-06 09:11:35.150183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:58.670 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:58.930 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:59.188 /dev/nbd0 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:59.188 1+0 records in 00:18:59.188 1+0 records out 00:18:59.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353688 s, 11.6 MB/s 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.188 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:18:59.446 /dev/nbd1 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:59.446 1+0 records in 00:18:59.446 1+0 records out 00:18:59.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124913 s, 3.3 MB/s 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:18:59.446 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:59.447 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 
00:18:59.447 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:18:59.447 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:59.447 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:59.447 09:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:59.705 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:59.705 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.705 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:59.705 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:59.705 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:59.705 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.705 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 
00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:59.963 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 
00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.220 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.221 [2024-11-06 09:11:36.566091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.221 [2024-11-06 09:11:36.566325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.221 [2024-11-06 09:11:36.566369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:00.221 [2024-11-06 09:11:36.566386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.221 [2024-11-06 09:11:36.569247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.221 [2024-11-06 09:11:36.569292] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.221 [2024-11-06 09:11:36.569405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:00.221 [2024-11-06 09:11:36.569465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:00.221 [2024-11-06 09:11:36.569648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.221 spare 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd 
bdev_wait_for_examine 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.221 [2024-11-06 09:11:36.669779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:00.221 [2024-11-06 09:11:36.670042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:00.221 [2024-11-06 09:11:36.670529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:19:00.221 [2024-11-06 09:11:36.670934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:00.221 [2024-11-06 09:11:36.670959] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:00.221 [2024-11-06 09:11:36.671220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.221 "name": "raid_bdev1", 00:19:00.221 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:00.221 "strip_size_kb": 0, 00:19:00.221 "state": "online", 00:19:00.221 "raid_level": "raid1", 00:19:00.221 "superblock": true, 00:19:00.221 "num_base_bdevs": 2, 00:19:00.221 "num_base_bdevs_discovered": 2, 00:19:00.221 "num_base_bdevs_operational": 2, 00:19:00.221 "base_bdevs_list": [ 00:19:00.221 { 00:19:00.221 "name": "spare", 00:19:00.221 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:19:00.221 "is_configured": true, 00:19:00.221 "data_offset": 2048, 00:19:00.221 "data_size": 63488 00:19:00.221 }, 00:19:00.221 { 00:19:00.221 "name": "BaseBdev2", 00:19:00.221 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:00.221 "is_configured": true, 00:19:00.221 "data_offset": 2048, 00:19:00.221 "data_size": 63488 00:19:00.221 } 00:19:00.221 ] 00:19:00.221 }' 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.221 09:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.812 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.812 "name": "raid_bdev1", 00:19:00.812 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:00.812 "strip_size_kb": 0, 00:19:00.812 "state": "online", 00:19:00.812 "raid_level": "raid1", 00:19:00.812 "superblock": true, 00:19:00.812 "num_base_bdevs": 2, 00:19:00.812 "num_base_bdevs_discovered": 2, 00:19:00.812 "num_base_bdevs_operational": 2, 00:19:00.812 "base_bdevs_list": [ 00:19:00.812 { 00:19:00.812 "name": "spare", 00:19:00.812 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:19:00.812 "is_configured": true, 00:19:00.812 "data_offset": 2048, 00:19:00.812 "data_size": 63488 00:19:00.812 }, 00:19:00.812 { 00:19:00.812 "name": "BaseBdev2", 00:19:00.813 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:00.813 "is_configured": 
true, 00:19:00.813 "data_offset": 2048, 00:19:00.813 "data_size": 63488 00:19:00.813 } 00:19:00.813 ] 00:19:00.813 }' 00:19:00.813 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.813 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.813 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.071 [2024-11-06 09:11:37.419504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.071 "name": "raid_bdev1", 00:19:01.071 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:01.071 "strip_size_kb": 0, 00:19:01.071 "state": "online", 00:19:01.071 "raid_level": "raid1", 00:19:01.071 "superblock": true, 00:19:01.071 "num_base_bdevs": 2, 00:19:01.071 "num_base_bdevs_discovered": 1, 00:19:01.071 "num_base_bdevs_operational": 1, 00:19:01.071 "base_bdevs_list": [ 
00:19:01.071 { 00:19:01.071 "name": null, 00:19:01.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.071 "is_configured": false, 00:19:01.071 "data_offset": 0, 00:19:01.071 "data_size": 63488 00:19:01.071 }, 00:19:01.071 { 00:19:01.071 "name": "BaseBdev2", 00:19:01.071 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:01.071 "is_configured": true, 00:19:01.071 "data_offset": 2048, 00:19:01.071 "data_size": 63488 00:19:01.071 } 00:19:01.071 ] 00:19:01.071 }' 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.071 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.637 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:01.637 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.637 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:01.637 [2024-11-06 09:11:37.971725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.637 [2024-11-06 09:11:37.972150] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:01.637 [2024-11-06 09:11:37.972186] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:01.637 [2024-11-06 09:11:37.972241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.637 [2024-11-06 09:11:37.988160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:19:01.637 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.637 09:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:01.637 [2024-11-06 09:11:37.990739] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.572 09:11:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.572 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.572 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:02.572 "name": "raid_bdev1", 00:19:02.572 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:02.572 "strip_size_kb": 0, 00:19:02.572 "state": "online", 
00:19:02.572 "raid_level": "raid1", 00:19:02.572 "superblock": true, 00:19:02.572 "num_base_bdevs": 2, 00:19:02.572 "num_base_bdevs_discovered": 2, 00:19:02.572 "num_base_bdevs_operational": 2, 00:19:02.572 "process": { 00:19:02.572 "type": "rebuild", 00:19:02.572 "target": "spare", 00:19:02.572 "progress": { 00:19:02.572 "blocks": 20480, 00:19:02.572 "percent": 32 00:19:02.572 } 00:19:02.572 }, 00:19:02.572 "base_bdevs_list": [ 00:19:02.572 { 00:19:02.572 "name": "spare", 00:19:02.572 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:19:02.572 "is_configured": true, 00:19:02.572 "data_offset": 2048, 00:19:02.572 "data_size": 63488 00:19:02.572 }, 00:19:02.572 { 00:19:02.572 "name": "BaseBdev2", 00:19:02.572 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:02.572 "is_configured": true, 00:19:02.572 "data_offset": 2048, 00:19:02.572 "data_size": 63488 00:19:02.572 } 00:19:02.572 ] 00:19:02.572 }' 00:19:02.572 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:02.572 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:02.572 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.830 [2024-11-06 09:11:39.167990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.830 [2024-11-06 09:11:39.199725] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:02.830 [2024-11-06 
09:11:39.200149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.830 [2024-11-06 09:11:39.200179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:02.830 [2024-11-06 09:11:39.200195] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:02.830 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.831 "name": "raid_bdev1", 00:19:02.831 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:02.831 "strip_size_kb": 0, 00:19:02.831 "state": "online", 00:19:02.831 "raid_level": "raid1", 00:19:02.831 "superblock": true, 00:19:02.831 "num_base_bdevs": 2, 00:19:02.831 "num_base_bdevs_discovered": 1, 00:19:02.831 "num_base_bdevs_operational": 1, 00:19:02.831 "base_bdevs_list": [ 00:19:02.831 { 00:19:02.831 "name": null, 00:19:02.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.831 "is_configured": false, 00:19:02.831 "data_offset": 0, 00:19:02.831 "data_size": 63488 00:19:02.831 }, 00:19:02.831 { 00:19:02.831 "name": "BaseBdev2", 00:19:02.831 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:02.831 "is_configured": true, 00:19:02.831 "data_offset": 2048, 00:19:02.831 "data_size": 63488 00:19:02.831 } 00:19:02.831 ] 00:19:02.831 }' 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.831 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.397 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.397 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.397 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:03.397 [2024-11-06 09:11:39.739349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.397 [2024-11-06 09:11:39.739452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.397 [2024-11-06 09:11:39.739485] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:19:03.397 [2024-11-06 09:11:39.739503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.397 [2024-11-06 09:11:39.740118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.397 [2024-11-06 09:11:39.740156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.397 [2024-11-06 09:11:39.740281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:03.397 [2024-11-06 09:11:39.740306] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:03.397 [2024-11-06 09:11:39.740319] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:03.397 [2024-11-06 09:11:39.740359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.397 spare 00:19:03.397 [2024-11-06 09:11:39.756514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:19:03.397 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.397 09:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:03.397 [2024-11-06 09:11:39.759142] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.330 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:04.330 "name": "raid_bdev1", 00:19:04.330 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:04.330 "strip_size_kb": 0, 00:19:04.330 "state": "online", 00:19:04.330 "raid_level": "raid1", 00:19:04.330 "superblock": true, 00:19:04.330 "num_base_bdevs": 2, 00:19:04.330 "num_base_bdevs_discovered": 2, 00:19:04.331 "num_base_bdevs_operational": 2, 00:19:04.331 "process": { 00:19:04.331 "type": "rebuild", 00:19:04.331 "target": "spare", 00:19:04.331 "progress": { 00:19:04.331 "blocks": 20480, 00:19:04.331 "percent": 32 00:19:04.331 } 00:19:04.331 }, 00:19:04.331 "base_bdevs_list": [ 00:19:04.331 { 00:19:04.331 "name": "spare", 00:19:04.331 "uuid": "8743f369-c738-52b6-8c80-4667cd3ae459", 00:19:04.331 "is_configured": true, 00:19:04.331 "data_offset": 2048, 00:19:04.331 "data_size": 63488 00:19:04.331 }, 00:19:04.331 { 00:19:04.331 "name": "BaseBdev2", 00:19:04.331 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:04.331 "is_configured": true, 00:19:04.331 "data_offset": 2048, 00:19:04.331 "data_size": 63488 00:19:04.331 } 00:19:04.331 ] 00:19:04.331 }' 00:19:04.331 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:04.589 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:04.589 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:04.589 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:04.589 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:04.589 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.589 09:11:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:04.589 [2024-11-06 09:11:40.912912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.589 [2024-11-06 09:11:40.968088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:04.589 [2024-11-06 09:11:40.968204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.589 [2024-11-06 09:11:40.968234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:04.589 [2024-11-06 09:11:40.968246] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.589 "name": "raid_bdev1", 00:19:04.589 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:04.589 "strip_size_kb": 0, 00:19:04.589 "state": "online", 00:19:04.589 "raid_level": "raid1", 00:19:04.589 "superblock": true, 00:19:04.589 "num_base_bdevs": 2, 00:19:04.589 "num_base_bdevs_discovered": 1, 00:19:04.589 "num_base_bdevs_operational": 1, 00:19:04.589 "base_bdevs_list": [ 00:19:04.589 { 00:19:04.589 "name": null, 00:19:04.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.589 "is_configured": false, 00:19:04.589 "data_offset": 0, 00:19:04.589 "data_size": 63488 00:19:04.589 }, 00:19:04.589 { 00:19:04.589 "name": "BaseBdev2", 00:19:04.589 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:04.589 "is_configured": true, 00:19:04.589 "data_offset": 2048, 00:19:04.589 "data_size": 63488 00:19:04.589 } 00:19:04.589 ] 00:19:04.589 }' 
00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.589 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.156 "name": "raid_bdev1", 00:19:05.156 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:05.156 "strip_size_kb": 0, 00:19:05.156 "state": "online", 00:19:05.156 "raid_level": "raid1", 00:19:05.156 "superblock": true, 00:19:05.156 "num_base_bdevs": 2, 00:19:05.156 "num_base_bdevs_discovered": 1, 00:19:05.156 "num_base_bdevs_operational": 1, 00:19:05.156 "base_bdevs_list": [ 00:19:05.156 { 00:19:05.156 "name": null, 00:19:05.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.156 "is_configured": false, 00:19:05.156 "data_offset": 0, 
00:19:05.156 "data_size": 63488 00:19:05.156 }, 00:19:05.156 { 00:19:05.156 "name": "BaseBdev2", 00:19:05.156 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:05.156 "is_configured": true, 00:19:05.156 "data_offset": 2048, 00:19:05.156 "data_size": 63488 00:19:05.156 } 00:19:05.156 ] 00:19:05.156 }' 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.156 [2024-11-06 09:11:41.683258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:05.156 [2024-11-06 09:11:41.683468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.156 [2024-11-06 09:11:41.683649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:05.156 [2024-11-06 09:11:41.683676] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.156 [2024-11-06 09:11:41.684261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.156 [2024-11-06 09:11:41.684298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.156 [2024-11-06 09:11:41.684406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:05.156 [2024-11-06 09:11:41.684427] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:05.156 [2024-11-06 09:11:41.684443] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:05.156 [2024-11-06 09:11:41.684457] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:05.156 BaseBdev1 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.156 09:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.529 "name": "raid_bdev1", 00:19:06.529 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:06.529 "strip_size_kb": 0, 00:19:06.529 "state": "online", 00:19:06.529 "raid_level": "raid1", 00:19:06.529 "superblock": true, 00:19:06.529 "num_base_bdevs": 2, 00:19:06.529 "num_base_bdevs_discovered": 1, 00:19:06.529 "num_base_bdevs_operational": 1, 00:19:06.529 "base_bdevs_list": [ 00:19:06.529 { 00:19:06.529 "name": null, 00:19:06.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.529 "is_configured": false, 00:19:06.529 "data_offset": 0, 00:19:06.529 "data_size": 63488 00:19:06.529 }, 00:19:06.529 { 00:19:06.529 "name": "BaseBdev2", 00:19:06.529 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:06.529 "is_configured": true, 00:19:06.529 "data_offset": 2048, 00:19:06.529 "data_size": 63488 00:19:06.529 } 00:19:06.529 ] 00:19:06.529 }' 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.529 09:11:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:06.788 "name": "raid_bdev1", 00:19:06.788 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:06.788 "strip_size_kb": 0, 00:19:06.788 "state": "online", 00:19:06.788 "raid_level": "raid1", 00:19:06.788 "superblock": true, 00:19:06.788 "num_base_bdevs": 2, 00:19:06.788 "num_base_bdevs_discovered": 1, 00:19:06.788 "num_base_bdevs_operational": 1, 00:19:06.788 "base_bdevs_list": [ 00:19:06.788 { 00:19:06.788 "name": null, 00:19:06.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.788 "is_configured": false, 00:19:06.788 "data_offset": 0, 00:19:06.788 "data_size": 63488 00:19:06.788 }, 00:19:06.788 { 00:19:06.788 "name": "BaseBdev2", 00:19:06.788 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:06.788 "is_configured": true, 
00:19:06.788 "data_offset": 2048, 00:19:06.788 "data_size": 63488 00:19:06.788 } 00:19:06.788 ] 00:19:06.788 }' 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:06.788 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.046 [2024-11-06 09:11:43.336005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.046 [2024-11-06 09:11:43.336349] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:07.046 [2024-11-06 09:11:43.336388] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:07.046 request: 00:19:07.046 { 00:19:07.046 "base_bdev": "BaseBdev1", 00:19:07.046 "raid_bdev": "raid_bdev1", 00:19:07.046 "method": "bdev_raid_add_base_bdev", 00:19:07.046 "req_id": 1 00:19:07.046 } 00:19:07.046 Got JSON-RPC error response 00:19:07.046 response: 00:19:07.046 { 00:19:07.046 "code": -22, 00:19:07.046 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:07.046 } 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.046 09:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.979 "name": "raid_bdev1", 00:19:07.979 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:07.979 "strip_size_kb": 0, 00:19:07.979 "state": "online", 00:19:07.979 "raid_level": "raid1", 00:19:07.979 "superblock": true, 00:19:07.979 "num_base_bdevs": 2, 00:19:07.979 "num_base_bdevs_discovered": 1, 00:19:07.979 "num_base_bdevs_operational": 1, 00:19:07.979 "base_bdevs_list": [ 00:19:07.979 { 00:19:07.979 "name": null, 00:19:07.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.979 "is_configured": false, 00:19:07.979 "data_offset": 0, 00:19:07.979 "data_size": 63488 00:19:07.979 }, 00:19:07.979 { 00:19:07.979 "name": "BaseBdev2", 00:19:07.979 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:07.979 "is_configured": true, 00:19:07.979 "data_offset": 2048, 00:19:07.979 "data_size": 63488 00:19:07.979 } 00:19:07.979 ] 00:19:07.979 }' 
00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.979 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.546 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.546 "name": "raid_bdev1", 00:19:08.546 "uuid": "beba6e90-3e36-42a0-b40d-c8c65dd57cf9", 00:19:08.546 "strip_size_kb": 0, 00:19:08.546 "state": "online", 00:19:08.546 "raid_level": "raid1", 00:19:08.546 "superblock": true, 00:19:08.546 "num_base_bdevs": 2, 00:19:08.546 "num_base_bdevs_discovered": 1, 00:19:08.546 "num_base_bdevs_operational": 1, 00:19:08.546 "base_bdevs_list": [ 00:19:08.546 { 00:19:08.546 "name": null, 00:19:08.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.547 "is_configured": false, 00:19:08.547 "data_offset": 0, 
00:19:08.547 "data_size": 63488 00:19:08.547 }, 00:19:08.547 { 00:19:08.547 "name": "BaseBdev2", 00:19:08.547 "uuid": "a135703b-30f7-530b-83ac-0c944ea2a241", 00:19:08.547 "is_configured": true, 00:19:08.547 "data_offset": 2048, 00:19:08.547 "data_size": 63488 00:19:08.547 } 00:19:08.547 ] 00:19:08.547 }' 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77013 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 77013 ']' 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 77013 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77013 00:19:08.547 killing process with pid 77013 00:19:08.547 Received shutdown signal, test time was about 17.829319 seconds 00:19:08.547 00:19:08.547 Latency(us) 00:19:08.547 [2024-11-06T09:11:45.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.547 [2024-11-06T09:11:45.082Z] =================================================================================================================== 00:19:08.547 [2024-11-06T09:11:45.082Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77013' 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 77013 00:19:08.547 [2024-11-06 09:11:44.987466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.547 09:11:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 77013 00:19:08.547 [2024-11-06 09:11:44.987630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.547 [2024-11-06 09:11:44.987699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.547 [2024-11-06 09:11:44.987718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:08.805 [2024-11-06 09:11:45.193925] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:19:10.183 00:19:10.183 real 0m21.166s 00:19:10.183 user 0m28.720s 00:19:10.183 sys 0m1.888s 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:10.183 ************************************ 00:19:10.183 END TEST raid_rebuild_test_sb_io 00:19:10.183 ************************************ 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:19:10.183 09:11:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:19:10.183 09:11:46 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:19:10.183 09:11:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:19:10.183 09:11:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:10.183 09:11:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.183 ************************************ 00:19:10.183 START TEST raid_rebuild_test 00:19:10.183 ************************************ 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:10.183 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77708 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77708 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 77708 ']' 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.184 09:11:46 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:10.184 09:11:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:10.184 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:10.184 Zero copy mechanism will not be used. 00:19:10.184 [2024-11-06 09:11:46.424528] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:19:10.184 [2024-11-06 09:11:46.424703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77708 ] 00:19:10.184 [2024-11-06 09:11:46.600913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.457 [2024-11-06 09:11:46.744014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.457 [2024-11-06 09:11:46.973569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:10.457 [2024-11-06 09:11:46.973664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.025 BaseBdev1_malloc 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.025 [2024-11-06 09:11:47.477472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:11.025 [2024-11-06 09:11:47.477690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.025 [2024-11-06 09:11:47.477768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:11.025 [2024-11-06 09:11:47.478047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.025 [2024-11-06 09:11:47.480777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.025 [2024-11-06 09:11:47.480840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:11.025 BaseBdev1 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:19:11.025 BaseBdev2_malloc 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.025 [2024-11-06 09:11:47.525435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:11.025 [2024-11-06 09:11:47.525512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.025 [2024-11-06 09:11:47.525541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:11.025 [2024-11-06 09:11:47.525563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.025 [2024-11-06 09:11:47.528329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.025 [2024-11-06 09:11:47.528379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:11.025 BaseBdev2 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.025 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.298 BaseBdev3_malloc 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.298 [2024-11-06 09:11:47.587994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:11.298 [2024-11-06 09:11:47.588188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.298 [2024-11-06 09:11:47.588232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:11.298 [2024-11-06 09:11:47.588253] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.298 [2024-11-06 09:11:47.590914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.298 [2024-11-06 09:11:47.590964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:11.298 BaseBdev3 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.298 BaseBdev4_malloc 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.298 09:11:47 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:11.298 [2024-11-06 09:11:47.637695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:11.299 [2024-11-06 09:11:47.637763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.299 [2024-11-06 09:11:47.637798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:11.299 [2024-11-06 09:11:47.637841] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.299 [2024-11-06 09:11:47.641322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.299 [2024-11-06 09:11:47.641392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:11.299 BaseBdev4 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.299 spare_malloc 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.299 spare_delay 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:11.299 
09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.299 [2024-11-06 09:11:47.700066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:11.299 [2024-11-06 09:11:47.700268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:11.299 [2024-11-06 09:11:47.700461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:11.299 [2024-11-06 09:11:47.700495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:11.299 [2024-11-06 09:11:47.703294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:11.299 [2024-11-06 09:11:47.703339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:11.299 spare 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.299 [2024-11-06 09:11:47.708199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.299 [2024-11-06 09:11:47.710713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.299 [2024-11-06 09:11:47.710943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:11.299 [2024-11-06 09:11:47.711150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:11.299 [2024-11-06 09:11:47.711372] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:19:11.299 [2024-11-06 09:11:47.711405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:11.299 [2024-11-06 09:11:47.711738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:11.299 [2024-11-06 09:11:47.711985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:11.299 [2024-11-06 09:11:47.712006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:11.299 [2024-11-06 09:11:47.712254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.299 09:11:47 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.299 "name": "raid_bdev1", 00:19:11.299 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:11.299 "strip_size_kb": 0, 00:19:11.299 "state": "online", 00:19:11.299 "raid_level": "raid1", 00:19:11.299 "superblock": false, 00:19:11.299 "num_base_bdevs": 4, 00:19:11.299 "num_base_bdevs_discovered": 4, 00:19:11.299 "num_base_bdevs_operational": 4, 00:19:11.299 "base_bdevs_list": [ 00:19:11.299 { 00:19:11.299 "name": "BaseBdev1", 00:19:11.299 "uuid": "bbd61bbd-83c1-5173-8b10-4bfd2a716e04", 00:19:11.299 "is_configured": true, 00:19:11.299 "data_offset": 0, 00:19:11.299 "data_size": 65536 00:19:11.299 }, 00:19:11.299 { 00:19:11.299 "name": "BaseBdev2", 00:19:11.299 "uuid": "9165e752-a152-5a7d-81ef-312884e8eaca", 00:19:11.299 "is_configured": true, 00:19:11.299 "data_offset": 0, 00:19:11.299 "data_size": 65536 00:19:11.299 }, 00:19:11.299 { 00:19:11.299 "name": "BaseBdev3", 00:19:11.299 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:11.299 "is_configured": true, 00:19:11.299 "data_offset": 0, 00:19:11.299 "data_size": 65536 00:19:11.299 }, 00:19:11.299 { 00:19:11.299 "name": "BaseBdev4", 00:19:11.299 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:11.299 "is_configured": true, 00:19:11.299 "data_offset": 0, 00:19:11.299 "data_size": 65536 00:19:11.299 } 00:19:11.299 ] 00:19:11.299 }' 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.299 09:11:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:11.865 [2024-11-06 09:11:48.172760] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:11.865 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:11.866 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:12.123 [2024-11-06 09:11:48.564519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:12.123 /dev/nbd0 00:19:12.123 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:12.123 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:12.123 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:12.124 09:11:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:12.124 1+0 records in 00:19:12.124 1+0 records out 00:19:12.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571707 s, 7.2 MB/s 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:12.124 09:11:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:20.248 65536+0 records in 00:19:20.248 65536+0 records out 00:19:20.248 33554432 bytes (34 MB, 32 MiB) copied, 8.08433 s, 4.2 MB/s 00:19:20.248 09:11:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:20.248 09:11:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:20.248 09:11:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:20.248 09:11:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:20.248 
09:11:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:20.248 09:11:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:20.248 09:11:56 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:20.507 [2024-11-06 09:11:57.015942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.507 [2024-11-06 09:11:57.029385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.507 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.764 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.764 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.764 "name": "raid_bdev1", 00:19:20.764 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:20.764 "strip_size_kb": 0, 00:19:20.764 "state": "online", 00:19:20.764 "raid_level": "raid1", 00:19:20.764 "superblock": false, 00:19:20.764 "num_base_bdevs": 4, 00:19:20.764 "num_base_bdevs_discovered": 3, 00:19:20.764 "num_base_bdevs_operational": 3, 00:19:20.764 "base_bdevs_list": [ 00:19:20.764 { 00:19:20.764 "name": null, 00:19:20.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.764 
"is_configured": false, 00:19:20.764 "data_offset": 0, 00:19:20.764 "data_size": 65536 00:19:20.764 }, 00:19:20.764 { 00:19:20.764 "name": "BaseBdev2", 00:19:20.764 "uuid": "9165e752-a152-5a7d-81ef-312884e8eaca", 00:19:20.764 "is_configured": true, 00:19:20.764 "data_offset": 0, 00:19:20.764 "data_size": 65536 00:19:20.764 }, 00:19:20.764 { 00:19:20.764 "name": "BaseBdev3", 00:19:20.764 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:20.764 "is_configured": true, 00:19:20.764 "data_offset": 0, 00:19:20.764 "data_size": 65536 00:19:20.764 }, 00:19:20.764 { 00:19:20.764 "name": "BaseBdev4", 00:19:20.764 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:20.764 "is_configured": true, 00:19:20.764 "data_offset": 0, 00:19:20.764 "data_size": 65536 00:19:20.764 } 00:19:20.764 ] 00:19:20.764 }' 00:19:20.764 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.764 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.022 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:21.022 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.022 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.022 [2024-11-06 09:11:57.533537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:21.022 [2024-11-06 09:11:57.547518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:19:21.023 09:11:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.023 09:11:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:21.023 [2024-11-06 09:11:57.549991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.397 "name": "raid_bdev1", 00:19:22.397 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:22.397 "strip_size_kb": 0, 00:19:22.397 "state": "online", 00:19:22.397 "raid_level": "raid1", 00:19:22.397 "superblock": false, 00:19:22.397 "num_base_bdevs": 4, 00:19:22.397 "num_base_bdevs_discovered": 4, 00:19:22.397 "num_base_bdevs_operational": 4, 00:19:22.397 "process": { 00:19:22.397 "type": "rebuild", 00:19:22.397 "target": "spare", 00:19:22.397 "progress": { 00:19:22.397 "blocks": 20480, 00:19:22.397 "percent": 31 00:19:22.397 } 00:19:22.397 }, 00:19:22.397 "base_bdevs_list": [ 00:19:22.397 { 00:19:22.397 "name": "spare", 00:19:22.397 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:22.397 "is_configured": true, 00:19:22.397 "data_offset": 0, 00:19:22.397 "data_size": 65536 00:19:22.397 }, 00:19:22.397 { 00:19:22.397 "name": "BaseBdev2", 00:19:22.397 "uuid": 
"9165e752-a152-5a7d-81ef-312884e8eaca", 00:19:22.397 "is_configured": true, 00:19:22.397 "data_offset": 0, 00:19:22.397 "data_size": 65536 00:19:22.397 }, 00:19:22.397 { 00:19:22.397 "name": "BaseBdev3", 00:19:22.397 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:22.397 "is_configured": true, 00:19:22.397 "data_offset": 0, 00:19:22.397 "data_size": 65536 00:19:22.397 }, 00:19:22.397 { 00:19:22.397 "name": "BaseBdev4", 00:19:22.397 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:22.397 "is_configured": true, 00:19:22.397 "data_offset": 0, 00:19:22.397 "data_size": 65536 00:19:22.397 } 00:19:22.397 ] 00:19:22.397 }' 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.397 [2024-11-06 09:11:58.718996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.397 [2024-11-06 09:11:58.758950] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:22.397 [2024-11-06 09:11:58.759188] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.397 [2024-11-06 09:11:58.759220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:22.397 [2024-11-06 09:11:58.759237] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.397 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.397 "name": "raid_bdev1", 00:19:22.397 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:22.397 "strip_size_kb": 0, 00:19:22.397 "state": "online", 
00:19:22.397 "raid_level": "raid1", 00:19:22.397 "superblock": false, 00:19:22.397 "num_base_bdevs": 4, 00:19:22.397 "num_base_bdevs_discovered": 3, 00:19:22.397 "num_base_bdevs_operational": 3, 00:19:22.397 "base_bdevs_list": [ 00:19:22.397 { 00:19:22.397 "name": null, 00:19:22.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.397 "is_configured": false, 00:19:22.397 "data_offset": 0, 00:19:22.397 "data_size": 65536 00:19:22.397 }, 00:19:22.397 { 00:19:22.397 "name": "BaseBdev2", 00:19:22.398 "uuid": "9165e752-a152-5a7d-81ef-312884e8eaca", 00:19:22.398 "is_configured": true, 00:19:22.398 "data_offset": 0, 00:19:22.398 "data_size": 65536 00:19:22.398 }, 00:19:22.398 { 00:19:22.398 "name": "BaseBdev3", 00:19:22.398 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:22.398 "is_configured": true, 00:19:22.398 "data_offset": 0, 00:19:22.398 "data_size": 65536 00:19:22.398 }, 00:19:22.398 { 00:19:22.398 "name": "BaseBdev4", 00:19:22.398 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:22.398 "is_configured": true, 00:19:22.398 "data_offset": 0, 00:19:22.398 "data_size": 65536 00:19:22.398 } 00:19:22.398 ] 00:19:22.398 }' 00:19:22.398 09:11:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.398 09:11:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.967 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.967 "name": "raid_bdev1", 00:19:22.967 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:22.967 "strip_size_kb": 0, 00:19:22.968 "state": "online", 00:19:22.968 "raid_level": "raid1", 00:19:22.968 "superblock": false, 00:19:22.968 "num_base_bdevs": 4, 00:19:22.968 "num_base_bdevs_discovered": 3, 00:19:22.968 "num_base_bdevs_operational": 3, 00:19:22.968 "base_bdevs_list": [ 00:19:22.968 { 00:19:22.968 "name": null, 00:19:22.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.968 "is_configured": false, 00:19:22.968 "data_offset": 0, 00:19:22.968 "data_size": 65536 00:19:22.968 }, 00:19:22.968 { 00:19:22.968 "name": "BaseBdev2", 00:19:22.968 "uuid": "9165e752-a152-5a7d-81ef-312884e8eaca", 00:19:22.968 "is_configured": true, 00:19:22.968 "data_offset": 0, 00:19:22.968 "data_size": 65536 00:19:22.968 }, 00:19:22.968 { 00:19:22.968 "name": "BaseBdev3", 00:19:22.968 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:22.968 "is_configured": true, 00:19:22.968 "data_offset": 0, 00:19:22.968 "data_size": 65536 00:19:22.968 }, 00:19:22.968 { 00:19:22.968 "name": "BaseBdev4", 00:19:22.968 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:22.968 "is_configured": true, 00:19:22.968 "data_offset": 0, 00:19:22.968 "data_size": 65536 00:19:22.968 } 00:19:22.968 ] 00:19:22.968 }' 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.968 [2024-11-06 09:11:59.455068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:22.968 [2024-11-06 09:11:59.468498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.968 09:11:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:22.968 [2024-11-06 09:11:59.471018] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.340 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.340 "name": "raid_bdev1", 00:19:24.340 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:24.340 "strip_size_kb": 0, 00:19:24.341 "state": "online", 00:19:24.341 "raid_level": "raid1", 00:19:24.341 "superblock": false, 00:19:24.341 "num_base_bdevs": 4, 00:19:24.341 "num_base_bdevs_discovered": 4, 00:19:24.341 "num_base_bdevs_operational": 4, 00:19:24.341 "process": { 00:19:24.341 "type": "rebuild", 00:19:24.341 "target": "spare", 00:19:24.341 "progress": { 00:19:24.341 "blocks": 20480, 00:19:24.341 "percent": 31 00:19:24.341 } 00:19:24.341 }, 00:19:24.341 "base_bdevs_list": [ 00:19:24.341 { 00:19:24.341 "name": "spare", 00:19:24.341 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:24.341 "is_configured": true, 00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 }, 00:19:24.341 { 00:19:24.341 "name": "BaseBdev2", 00:19:24.341 "uuid": "9165e752-a152-5a7d-81ef-312884e8eaca", 00:19:24.341 "is_configured": true, 00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 }, 00:19:24.341 { 00:19:24.341 "name": "BaseBdev3", 00:19:24.341 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:24.341 "is_configured": true, 00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 }, 00:19:24.341 { 00:19:24.341 "name": "BaseBdev4", 00:19:24.341 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:24.341 "is_configured": true, 00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 } 00:19:24.341 ] 00:19:24.341 }' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.341 [2024-11-06 09:12:00.640437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:24.341 [2024-11-06 09:12:00.679779] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.341 09:12:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.341 "name": "raid_bdev1", 00:19:24.341 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:24.341 "strip_size_kb": 0, 00:19:24.341 "state": "online", 00:19:24.341 "raid_level": "raid1", 00:19:24.341 "superblock": false, 00:19:24.341 "num_base_bdevs": 4, 00:19:24.341 "num_base_bdevs_discovered": 3, 00:19:24.341 "num_base_bdevs_operational": 3, 00:19:24.341 "process": { 00:19:24.341 "type": "rebuild", 00:19:24.341 "target": "spare", 00:19:24.341 "progress": { 00:19:24.341 "blocks": 24576, 00:19:24.341 "percent": 37 00:19:24.341 } 00:19:24.341 }, 00:19:24.341 "base_bdevs_list": [ 00:19:24.341 { 00:19:24.341 "name": "spare", 00:19:24.341 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:24.341 "is_configured": true, 00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 }, 00:19:24.341 { 00:19:24.341 "name": null, 00:19:24.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.341 "is_configured": false, 00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 }, 00:19:24.341 { 00:19:24.341 "name": "BaseBdev3", 00:19:24.341 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:24.341 "is_configured": true, 
00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 }, 00:19:24.341 { 00:19:24.341 "name": "BaseBdev4", 00:19:24.341 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:24.341 "is_configured": true, 00:19:24.341 "data_offset": 0, 00:19:24.341 "data_size": 65536 00:19:24.341 } 00:19:24.341 ] 00:19:24.341 }' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=480 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.341 09:12:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.341 09:12:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.599 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.599 "name": "raid_bdev1", 00:19:24.599 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:24.599 "strip_size_kb": 0, 00:19:24.599 "state": "online", 00:19:24.599 "raid_level": "raid1", 00:19:24.599 "superblock": false, 00:19:24.599 "num_base_bdevs": 4, 00:19:24.599 "num_base_bdevs_discovered": 3, 00:19:24.599 "num_base_bdevs_operational": 3, 00:19:24.599 "process": { 00:19:24.599 "type": "rebuild", 00:19:24.599 "target": "spare", 00:19:24.599 "progress": { 00:19:24.599 "blocks": 26624, 00:19:24.599 "percent": 40 00:19:24.599 } 00:19:24.599 }, 00:19:24.599 "base_bdevs_list": [ 00:19:24.599 { 00:19:24.599 "name": "spare", 00:19:24.599 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:24.599 "is_configured": true, 00:19:24.599 "data_offset": 0, 00:19:24.599 "data_size": 65536 00:19:24.599 }, 00:19:24.599 { 00:19:24.599 "name": null, 00:19:24.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.599 "is_configured": false, 00:19:24.599 "data_offset": 0, 00:19:24.599 "data_size": 65536 00:19:24.599 }, 00:19:24.599 { 00:19:24.599 "name": "BaseBdev3", 00:19:24.599 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:24.599 "is_configured": true, 00:19:24.599 "data_offset": 0, 00:19:24.599 "data_size": 65536 00:19:24.599 }, 00:19:24.599 { 00:19:24.599 "name": "BaseBdev4", 00:19:24.599 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:24.599 "is_configured": true, 00:19:24.599 "data_offset": 0, 00:19:24.599 "data_size": 65536 00:19:24.599 } 00:19:24.599 ] 00:19:24.599 }' 00:19:24.599 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.599 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:24.599 09:12:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:19:24.599 09:12:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.599 09:12:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.559 "name": "raid_bdev1", 00:19:25.559 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:25.559 "strip_size_kb": 0, 00:19:25.559 "state": "online", 00:19:25.559 "raid_level": "raid1", 00:19:25.559 "superblock": false, 00:19:25.559 "num_base_bdevs": 4, 00:19:25.559 "num_base_bdevs_discovered": 3, 00:19:25.559 "num_base_bdevs_operational": 3, 00:19:25.559 "process": { 00:19:25.559 "type": "rebuild", 00:19:25.559 "target": "spare", 00:19:25.559 "progress": { 00:19:25.559 
"blocks": 51200, 00:19:25.559 "percent": 78 00:19:25.559 } 00:19:25.559 }, 00:19:25.559 "base_bdevs_list": [ 00:19:25.559 { 00:19:25.559 "name": "spare", 00:19:25.559 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:25.559 "is_configured": true, 00:19:25.559 "data_offset": 0, 00:19:25.559 "data_size": 65536 00:19:25.559 }, 00:19:25.559 { 00:19:25.559 "name": null, 00:19:25.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.559 "is_configured": false, 00:19:25.559 "data_offset": 0, 00:19:25.559 "data_size": 65536 00:19:25.559 }, 00:19:25.559 { 00:19:25.559 "name": "BaseBdev3", 00:19:25.559 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:25.559 "is_configured": true, 00:19:25.559 "data_offset": 0, 00:19:25.559 "data_size": 65536 00:19:25.559 }, 00:19:25.559 { 00:19:25.559 "name": "BaseBdev4", 00:19:25.559 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:25.559 "is_configured": true, 00:19:25.559 "data_offset": 0, 00:19:25.559 "data_size": 65536 00:19:25.559 } 00:19:25.559 ] 00:19:25.559 }' 00:19:25.559 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.822 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.822 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.822 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.822 09:12:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:26.388 [2024-11-06 09:12:02.694426] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:26.388 [2024-11-06 09:12:02.694621] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:26.388 [2024-11-06 09:12:02.694715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.956 "name": "raid_bdev1", 00:19:26.956 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:26.956 "strip_size_kb": 0, 00:19:26.956 "state": "online", 00:19:26.956 "raid_level": "raid1", 00:19:26.956 "superblock": false, 00:19:26.956 "num_base_bdevs": 4, 00:19:26.956 "num_base_bdevs_discovered": 3, 00:19:26.956 "num_base_bdevs_operational": 3, 00:19:26.956 "base_bdevs_list": [ 00:19:26.956 { 00:19:26.956 "name": "spare", 00:19:26.956 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:26.956 "is_configured": true, 00:19:26.956 "data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 }, 00:19:26.956 { 00:19:26.956 "name": null, 00:19:26.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.956 "is_configured": false, 00:19:26.956 
"data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 }, 00:19:26.956 { 00:19:26.956 "name": "BaseBdev3", 00:19:26.956 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:26.956 "is_configured": true, 00:19:26.956 "data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 }, 00:19:26.956 { 00:19:26.956 "name": "BaseBdev4", 00:19:26.956 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:26.956 "is_configured": true, 00:19:26.956 "data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 } 00:19:26.956 ] 00:19:26.956 }' 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.956 09:12:03 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.956 "name": "raid_bdev1", 00:19:26.956 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:26.956 "strip_size_kb": 0, 00:19:26.956 "state": "online", 00:19:26.956 "raid_level": "raid1", 00:19:26.956 "superblock": false, 00:19:26.956 "num_base_bdevs": 4, 00:19:26.956 "num_base_bdevs_discovered": 3, 00:19:26.956 "num_base_bdevs_operational": 3, 00:19:26.956 "base_bdevs_list": [ 00:19:26.956 { 00:19:26.956 "name": "spare", 00:19:26.956 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:26.956 "is_configured": true, 00:19:26.956 "data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 }, 00:19:26.956 { 00:19:26.956 "name": null, 00:19:26.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.956 "is_configured": false, 00:19:26.956 "data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 }, 00:19:26.956 { 00:19:26.956 "name": "BaseBdev3", 00:19:26.956 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:26.956 "is_configured": true, 00:19:26.956 "data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 }, 00:19:26.956 { 00:19:26.956 "name": "BaseBdev4", 00:19:26.956 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:26.956 "is_configured": true, 00:19:26.956 "data_offset": 0, 00:19:26.956 "data_size": 65536 00:19:26.956 } 00:19:26.956 ] 00:19:26.956 }' 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.956 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.215 "name": "raid_bdev1", 00:19:27.215 "uuid": "b624605f-f21f-46bc-ba5b-c5ad5b5fe154", 00:19:27.215 "strip_size_kb": 0, 00:19:27.215 "state": "online", 00:19:27.215 "raid_level": "raid1", 00:19:27.215 "superblock": false, 00:19:27.215 "num_base_bdevs": 4, 00:19:27.215 
"num_base_bdevs_discovered": 3, 00:19:27.215 "num_base_bdevs_operational": 3, 00:19:27.215 "base_bdevs_list": [ 00:19:27.215 { 00:19:27.215 "name": "spare", 00:19:27.215 "uuid": "d593b24d-e7b3-59f0-8dbe-c8208919a15f", 00:19:27.215 "is_configured": true, 00:19:27.215 "data_offset": 0, 00:19:27.215 "data_size": 65536 00:19:27.215 }, 00:19:27.215 { 00:19:27.215 "name": null, 00:19:27.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.215 "is_configured": false, 00:19:27.215 "data_offset": 0, 00:19:27.215 "data_size": 65536 00:19:27.215 }, 00:19:27.215 { 00:19:27.215 "name": "BaseBdev3", 00:19:27.215 "uuid": "ea521155-a452-5615-a23c-a1fe97c89475", 00:19:27.215 "is_configured": true, 00:19:27.215 "data_offset": 0, 00:19:27.215 "data_size": 65536 00:19:27.215 }, 00:19:27.215 { 00:19:27.215 "name": "BaseBdev4", 00:19:27.215 "uuid": "e06eb281-f008-59d5-9b06-bdbc0de20b68", 00:19:27.215 "is_configured": true, 00:19:27.215 "data_offset": 0, 00:19:27.215 "data_size": 65536 00:19:27.215 } 00:19:27.215 ] 00:19:27.215 }' 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.215 09:12:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.473 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:27.473 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.473 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.731 [2024-11-06 09:12:04.010509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:27.731 [2024-11-06 09:12:04.010682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:27.731 [2024-11-06 09:12:04.010914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:27.731 [2024-11-06 09:12:04.011146] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:27.731 [2024-11-06 09:12:04.011328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:27.731 09:12:04 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:27.731 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:27.989 /dev/nbd0 00:19:27.989 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:27.989 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:27.989 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:27.989 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:27.989 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:27.989 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:27.989 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.990 1+0 records in 00:19:27.990 1+0 records out 00:19:27.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031431 s, 13.0 MB/s 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:27.990 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:28.248 /dev/nbd1 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.248 1+0 records in 00:19:28.248 1+0 records out 00:19:28.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392083 s, 10.4 MB/s 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:28.248 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:28.506 09:12:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:28.506 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.506 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:28.506 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:28.506 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:28.506 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.506 09:12:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:28.764 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:28.764 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:28.764 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:28.764 09:12:05 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:28.764 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:28.765 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:28.765 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:28.765 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:28.765 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:28.765 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77708 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 77708 ']' 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 77708 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # 
uname 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77708 00:19:29.023 killing process with pid 77708 00:19:29.023 Received shutdown signal, test time was about 60.000000 seconds 00:19:29.023 00:19:29.023 Latency(us) 00:19:29.023 [2024-11-06T09:12:05.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.023 [2024-11-06T09:12:05.558Z] =================================================================================================================== 00:19:29.023 [2024-11-06T09:12:05.558Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77708' 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 77708 00:19:29.023 [2024-11-06 09:12:05.515676] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:29.023 09:12:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 77708 00:19:29.590 [2024-11-06 09:12:05.949107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:30.524 09:12:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:30.524 00:19:30.524 real 0m20.637s 00:19:30.524 user 0m23.360s 00:19:30.524 sys 0m3.635s 00:19:30.524 09:12:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:30.524 ************************************ 00:19:30.524 END TEST raid_rebuild_test 00:19:30.524 ************************************ 00:19:30.524 09:12:06 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@10 -- # set +x 00:19:30.524 09:12:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:19:30.524 09:12:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:19:30.524 09:12:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:30.524 09:12:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:30.524 ************************************ 00:19:30.524 START TEST raid_rebuild_test_sb 00:19:30.525 ************************************ 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78186 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78186 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78186 ']' 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:30.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:30.525 09:12:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.783 [2024-11-06 09:12:07.141366] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:19:30.784 [2024-11-06 09:12:07.141742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:19:30.784 Zero copy mechanism will not be used. 
00:19:30.784 -allocations --file-prefix=spdk_pid78186 ] 00:19:31.041 [2024-11-06 09:12:07.333049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.041 [2024-11-06 09:12:07.493425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.300 [2024-11-06 09:12:07.730033] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.300 [2024-11-06 09:12:07.730286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 BaseBdev1_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 [2024-11-06 09:12:08.162727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:31.866 [2024-11-06 09:12:08.162983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.866 [2024-11-06 09:12:08.163147] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:19:31.866 [2024-11-06 09:12:08.163318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.866 [2024-11-06 09:12:08.166405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.866 BaseBdev1 00:19:31.866 [2024-11-06 09:12:08.166595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 BaseBdev2_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 [2024-11-06 09:12:08.211386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:31.866 [2024-11-06 09:12:08.211603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.866 [2024-11-06 09:12:08.211685] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:31.866 [2024-11-06 09:12:08.211872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.866 [2024-11-06 09:12:08.214787] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.866 [2024-11-06 09:12:08.214862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:31.866 BaseBdev2 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 BaseBdev3_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 [2024-11-06 09:12:08.276447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:31.866 [2024-11-06 09:12:08.276653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.866 [2024-11-06 09:12:08.276805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:31.866 [2024-11-06 09:12:08.276961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.866 [2024-11-06 09:12:08.279758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.866 [2024-11-06 09:12:08.279830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 
00:19:31.866 BaseBdev3 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 BaseBdev4_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 [2024-11-06 09:12:08.329011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:31.866 [2024-11-06 09:12:08.329205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.866 [2024-11-06 09:12:08.329244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:31.866 [2024-11-06 09:12:08.329264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.866 [2024-11-06 09:12:08.332052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.866 [2024-11-06 09:12:08.332108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:31.866 BaseBdev4 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 spare_malloc 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 spare_delay 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.866 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.866 [2024-11-06 09:12:08.389190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:31.867 [2024-11-06 09:12:08.389396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.867 [2024-11-06 09:12:08.389437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:31.867 [2024-11-06 09:12:08.389457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.867 [2024-11-06 09:12:08.392305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.867 [2024-11-06 09:12:08.392359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:31.867 spare 
00:19:31.867 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.867 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:31.867 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.867 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.867 [2024-11-06 09:12:08.397333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.125 [2024-11-06 09:12:08.399929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:32.125 [2024-11-06 09:12:08.400028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:32.125 [2024-11-06 09:12:08.400112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:32.125 [2024-11-06 09:12:08.400352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:32.125 [2024-11-06 09:12:08.400380] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:32.125 [2024-11-06 09:12:08.400695] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:32.125 [2024-11-06 09:12:08.400953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:32.125 [2024-11-06 09:12:08.400972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:32.125 [2024-11-06 09:12:08.401238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 4 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.125 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.126 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.126 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.126 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.126 "name": "raid_bdev1", 00:19:32.126 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:32.126 "strip_size_kb": 0, 00:19:32.126 "state": "online", 00:19:32.126 "raid_level": "raid1", 00:19:32.126 "superblock": true, 00:19:32.126 "num_base_bdevs": 4, 00:19:32.126 "num_base_bdevs_discovered": 4, 00:19:32.126 "num_base_bdevs_operational": 4, 00:19:32.126 
"base_bdevs_list": [ 00:19:32.126 { 00:19:32.126 "name": "BaseBdev1", 00:19:32.126 "uuid": "3c57bc8d-9a87-5b8e-bc2c-5c07bceef602", 00:19:32.126 "is_configured": true, 00:19:32.126 "data_offset": 2048, 00:19:32.126 "data_size": 63488 00:19:32.126 }, 00:19:32.126 { 00:19:32.126 "name": "BaseBdev2", 00:19:32.126 "uuid": "6e307173-5811-5f90-b6fc-95f99b9ac450", 00:19:32.126 "is_configured": true, 00:19:32.126 "data_offset": 2048, 00:19:32.126 "data_size": 63488 00:19:32.126 }, 00:19:32.126 { 00:19:32.126 "name": "BaseBdev3", 00:19:32.126 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:32.126 "is_configured": true, 00:19:32.126 "data_offset": 2048, 00:19:32.126 "data_size": 63488 00:19:32.126 }, 00:19:32.126 { 00:19:32.126 "name": "BaseBdev4", 00:19:32.126 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:32.126 "is_configured": true, 00:19:32.126 "data_offset": 2048, 00:19:32.126 "data_size": 63488 00:19:32.126 } 00:19:32.126 ] 00:19:32.126 }' 00:19:32.126 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.126 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.692 [2024-11-06 09:12:08.929892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # 
jq -r '.[].base_bdevs_list[0].data_offset' 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.692 09:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:32.692 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk 
raid_bdev1 /dev/nbd0 00:19:32.950 [2024-11-06 09:12:09.261623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:32.950 /dev/nbd0 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:32.950 1+0 records in 00:19:32.950 1+0 records out 00:19:32.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028333 s, 14.5 MB/s 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:32.950 09:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:41.072 63488+0 records in 00:19:41.072 63488+0 records out 00:19:41.072 32505856 bytes (33 MB, 31 MiB) copied, 8.18975 s, 4.0 MB/s 00:19:41.072 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:41.072 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.072 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:41.072 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.072 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:41.072 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.072 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:41.339 [2024-11-06 09:12:17.801092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.339 09:12:17 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.339 [2024-11-06 09:12:17.837210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.339 09:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.597 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.597 "name": "raid_bdev1", 00:19:41.597 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:41.597 "strip_size_kb": 0, 00:19:41.597 "state": "online", 00:19:41.597 "raid_level": "raid1", 00:19:41.597 "superblock": true, 00:19:41.597 "num_base_bdevs": 4, 00:19:41.597 "num_base_bdevs_discovered": 3, 00:19:41.597 "num_base_bdevs_operational": 3, 00:19:41.597 "base_bdevs_list": [ 00:19:41.597 { 00:19:41.597 "name": null, 00:19:41.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.597 "is_configured": false, 00:19:41.597 "data_offset": 0, 00:19:41.597 "data_size": 63488 00:19:41.597 }, 00:19:41.597 { 00:19:41.597 "name": "BaseBdev2", 00:19:41.597 "uuid": "6e307173-5811-5f90-b6fc-95f99b9ac450", 00:19:41.597 "is_configured": true, 00:19:41.597 "data_offset": 2048, 00:19:41.597 "data_size": 63488 00:19:41.597 }, 00:19:41.597 { 00:19:41.597 "name": "BaseBdev3", 00:19:41.597 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:41.597 "is_configured": true, 00:19:41.597 "data_offset": 2048, 00:19:41.597 "data_size": 63488 00:19:41.597 }, 00:19:41.597 { 00:19:41.597 "name": "BaseBdev4", 00:19:41.597 
"uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:41.597 "is_configured": true, 00:19:41.597 "data_offset": 2048, 00:19:41.597 "data_size": 63488 00:19:41.597 } 00:19:41.597 ] 00:19:41.597 }' 00:19:41.597 09:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.597 09:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.854 09:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:41.854 09:12:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.854 09:12:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:41.854 [2024-11-06 09:12:18.337367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:41.855 [2024-11-06 09:12:18.352018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:19:41.855 09:12:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.855 09:12:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:41.855 [2024-11-06 09:12:18.354610] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.229 "name": "raid_bdev1", 00:19:43.229 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:43.229 "strip_size_kb": 0, 00:19:43.229 "state": "online", 00:19:43.229 "raid_level": "raid1", 00:19:43.229 "superblock": true, 00:19:43.229 "num_base_bdevs": 4, 00:19:43.229 "num_base_bdevs_discovered": 4, 00:19:43.229 "num_base_bdevs_operational": 4, 00:19:43.229 "process": { 00:19:43.229 "type": "rebuild", 00:19:43.229 "target": "spare", 00:19:43.229 "progress": { 00:19:43.229 "blocks": 20480, 00:19:43.229 "percent": 32 00:19:43.229 } 00:19:43.229 }, 00:19:43.229 "base_bdevs_list": [ 00:19:43.229 { 00:19:43.229 "name": "spare", 00:19:43.229 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:43.229 "is_configured": true, 00:19:43.229 "data_offset": 2048, 00:19:43.229 "data_size": 63488 00:19:43.229 }, 00:19:43.229 { 00:19:43.229 "name": "BaseBdev2", 00:19:43.229 "uuid": "6e307173-5811-5f90-b6fc-95f99b9ac450", 00:19:43.229 "is_configured": true, 00:19:43.229 "data_offset": 2048, 00:19:43.229 "data_size": 63488 00:19:43.229 }, 00:19:43.229 { 00:19:43.229 "name": "BaseBdev3", 00:19:43.229 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:43.229 "is_configured": true, 00:19:43.229 "data_offset": 2048, 00:19:43.229 "data_size": 63488 00:19:43.229 }, 00:19:43.229 { 00:19:43.229 "name": "BaseBdev4", 00:19:43.229 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:43.229 "is_configured": true, 00:19:43.229 "data_offset": 2048, 00:19:43.229 
"data_size": 63488 00:19:43.229 } 00:19:43.229 ] 00:19:43.229 }' 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.229 [2024-11-06 09:12:19.519762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.229 [2024-11-06 09:12:19.563743] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:43.229 [2024-11-06 09:12:19.563992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.229 [2024-11-06 09:12:19.564025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.229 [2024-11-06 09:12:19.564042] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.229 "name": "raid_bdev1", 00:19:43.229 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:43.229 "strip_size_kb": 0, 00:19:43.229 "state": "online", 00:19:43.229 "raid_level": "raid1", 00:19:43.229 "superblock": true, 00:19:43.229 "num_base_bdevs": 4, 00:19:43.229 "num_base_bdevs_discovered": 3, 00:19:43.229 "num_base_bdevs_operational": 3, 00:19:43.229 "base_bdevs_list": [ 00:19:43.229 { 00:19:43.229 "name": null, 00:19:43.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.229 "is_configured": false, 00:19:43.229 "data_offset": 0, 00:19:43.229 "data_size": 63488 00:19:43.229 }, 00:19:43.229 { 00:19:43.229 "name": "BaseBdev2", 
00:19:43.229 "uuid": "6e307173-5811-5f90-b6fc-95f99b9ac450", 00:19:43.229 "is_configured": true, 00:19:43.229 "data_offset": 2048, 00:19:43.229 "data_size": 63488 00:19:43.229 }, 00:19:43.229 { 00:19:43.229 "name": "BaseBdev3", 00:19:43.229 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:43.229 "is_configured": true, 00:19:43.229 "data_offset": 2048, 00:19:43.229 "data_size": 63488 00:19:43.229 }, 00:19:43.229 { 00:19:43.229 "name": "BaseBdev4", 00:19:43.229 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:43.229 "is_configured": true, 00:19:43.229 "data_offset": 2048, 00:19:43.229 "data_size": 63488 00:19:43.229 } 00:19:43.229 ] 00:19:43.229 }' 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.229 09:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.826 09:12:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.826 "name": "raid_bdev1", 00:19:43.826 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:43.826 "strip_size_kb": 0, 00:19:43.826 "state": "online", 00:19:43.826 "raid_level": "raid1", 00:19:43.826 "superblock": true, 00:19:43.826 "num_base_bdevs": 4, 00:19:43.826 "num_base_bdevs_discovered": 3, 00:19:43.826 "num_base_bdevs_operational": 3, 00:19:43.826 "base_bdevs_list": [ 00:19:43.826 { 00:19:43.826 "name": null, 00:19:43.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.826 "is_configured": false, 00:19:43.826 "data_offset": 0, 00:19:43.826 "data_size": 63488 00:19:43.826 }, 00:19:43.826 { 00:19:43.826 "name": "BaseBdev2", 00:19:43.826 "uuid": "6e307173-5811-5f90-b6fc-95f99b9ac450", 00:19:43.826 "is_configured": true, 00:19:43.826 "data_offset": 2048, 00:19:43.826 "data_size": 63488 00:19:43.826 }, 00:19:43.826 { 00:19:43.826 "name": "BaseBdev3", 00:19:43.826 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:43.826 "is_configured": true, 00:19:43.826 "data_offset": 2048, 00:19:43.826 "data_size": 63488 00:19:43.826 }, 00:19:43.826 { 00:19:43.826 "name": "BaseBdev4", 00:19:43.826 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:43.826 "is_configured": true, 00:19:43.826 "data_offset": 2048, 00:19:43.826 "data_size": 63488 00:19:43.826 } 00:19:43.826 ] 00:19:43.826 }' 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:43.826 09:12:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.826 [2024-11-06 09:12:20.231932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:43.826 [2024-11-06 09:12:20.245289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.826 09:12:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:43.826 [2024-11-06 09:12:20.247959] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.760 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.018 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.018 "name": 
"raid_bdev1", 00:19:45.018 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:45.018 "strip_size_kb": 0, 00:19:45.018 "state": "online", 00:19:45.018 "raid_level": "raid1", 00:19:45.018 "superblock": true, 00:19:45.018 "num_base_bdevs": 4, 00:19:45.018 "num_base_bdevs_discovered": 4, 00:19:45.018 "num_base_bdevs_operational": 4, 00:19:45.018 "process": { 00:19:45.018 "type": "rebuild", 00:19:45.018 "target": "spare", 00:19:45.018 "progress": { 00:19:45.018 "blocks": 20480, 00:19:45.018 "percent": 32 00:19:45.018 } 00:19:45.018 }, 00:19:45.018 "base_bdevs_list": [ 00:19:45.018 { 00:19:45.018 "name": "spare", 00:19:45.019 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:45.019 "is_configured": true, 00:19:45.019 "data_offset": 2048, 00:19:45.019 "data_size": 63488 00:19:45.019 }, 00:19:45.019 { 00:19:45.019 "name": "BaseBdev2", 00:19:45.019 "uuid": "6e307173-5811-5f90-b6fc-95f99b9ac450", 00:19:45.019 "is_configured": true, 00:19:45.019 "data_offset": 2048, 00:19:45.019 "data_size": 63488 00:19:45.019 }, 00:19:45.019 { 00:19:45.019 "name": "BaseBdev3", 00:19:45.019 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:45.019 "is_configured": true, 00:19:45.019 "data_offset": 2048, 00:19:45.019 "data_size": 63488 00:19:45.019 }, 00:19:45.019 { 00:19:45.019 "name": "BaseBdev4", 00:19:45.019 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:45.019 "is_configured": true, 00:19:45.019 "data_offset": 2048, 00:19:45.019 "data_size": 63488 00:19:45.019 } 00:19:45.019 ] 00:19:45.019 }' 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.019 09:12:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:45.019 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.019 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.019 [2024-11-06 09:12:21.405340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:45.277 [2024-11-06 09:12:21.556562] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.277 09:12:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.277 "name": "raid_bdev1", 00:19:45.277 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:45.277 "strip_size_kb": 0, 00:19:45.277 "state": "online", 00:19:45.277 "raid_level": "raid1", 00:19:45.277 "superblock": true, 00:19:45.277 "num_base_bdevs": 4, 00:19:45.277 "num_base_bdevs_discovered": 3, 00:19:45.277 "num_base_bdevs_operational": 3, 00:19:45.277 "process": { 00:19:45.277 "type": "rebuild", 00:19:45.277 "target": "spare", 00:19:45.277 "progress": { 00:19:45.277 "blocks": 24576, 00:19:45.277 "percent": 38 00:19:45.277 } 00:19:45.277 }, 00:19:45.277 "base_bdevs_list": [ 00:19:45.277 { 00:19:45.277 "name": "spare", 00:19:45.277 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:45.277 "is_configured": true, 00:19:45.277 "data_offset": 2048, 00:19:45.277 "data_size": 63488 00:19:45.277 }, 00:19:45.277 { 00:19:45.277 "name": null, 00:19:45.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.277 "is_configured": false, 00:19:45.277 "data_offset": 0, 00:19:45.277 "data_size": 63488 00:19:45.277 }, 00:19:45.277 { 00:19:45.277 "name": "BaseBdev3", 00:19:45.277 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:45.277 "is_configured": true, 00:19:45.277 "data_offset": 2048, 00:19:45.277 "data_size": 63488 00:19:45.277 }, 
00:19:45.277 { 00:19:45.277 "name": "BaseBdev4", 00:19:45.277 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:45.277 "is_configured": true, 00:19:45.277 "data_offset": 2048, 00:19:45.277 "data_size": 63488 00:19:45.277 } 00:19:45.277 ] 00:19:45.277 }' 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.277 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.277 "name": "raid_bdev1", 00:19:45.277 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:45.277 "strip_size_kb": 0, 00:19:45.277 "state": "online", 00:19:45.277 "raid_level": "raid1", 00:19:45.277 "superblock": true, 00:19:45.277 "num_base_bdevs": 4, 00:19:45.277 "num_base_bdevs_discovered": 3, 00:19:45.277 "num_base_bdevs_operational": 3, 00:19:45.277 "process": { 00:19:45.278 "type": "rebuild", 00:19:45.278 "target": "spare", 00:19:45.278 "progress": { 00:19:45.278 "blocks": 26624, 00:19:45.278 "percent": 41 00:19:45.278 } 00:19:45.278 }, 00:19:45.278 "base_bdevs_list": [ 00:19:45.278 { 00:19:45.278 "name": "spare", 00:19:45.278 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:45.278 "is_configured": true, 00:19:45.278 "data_offset": 2048, 00:19:45.278 "data_size": 63488 00:19:45.278 }, 00:19:45.278 { 00:19:45.278 "name": null, 00:19:45.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.278 "is_configured": false, 00:19:45.278 "data_offset": 0, 00:19:45.278 "data_size": 63488 00:19:45.278 }, 00:19:45.278 { 00:19:45.278 "name": "BaseBdev3", 00:19:45.278 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:45.278 "is_configured": true, 00:19:45.278 "data_offset": 2048, 00:19:45.278 "data_size": 63488 00:19:45.278 }, 00:19:45.278 { 00:19:45.278 "name": "BaseBdev4", 00:19:45.278 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:45.278 "is_configured": true, 00:19:45.278 "data_offset": 2048, 00:19:45.278 "data_size": 63488 00:19:45.278 } 00:19:45.278 ] 00:19:45.278 }' 00:19:45.278 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.536 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.536 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:19:45.536 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.536 09:12:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.471 "name": "raid_bdev1", 00:19:46.471 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:46.471 "strip_size_kb": 0, 00:19:46.471 "state": "online", 00:19:46.471 "raid_level": "raid1", 00:19:46.471 "superblock": true, 00:19:46.471 "num_base_bdevs": 4, 00:19:46.471 "num_base_bdevs_discovered": 3, 00:19:46.471 "num_base_bdevs_operational": 3, 00:19:46.471 "process": { 00:19:46.471 "type": "rebuild", 00:19:46.471 "target": "spare", 
00:19:46.471 "progress": { 00:19:46.471 "blocks": 51200, 00:19:46.471 "percent": 80 00:19:46.471 } 00:19:46.471 }, 00:19:46.471 "base_bdevs_list": [ 00:19:46.471 { 00:19:46.471 "name": "spare", 00:19:46.471 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:46.471 "is_configured": true, 00:19:46.471 "data_offset": 2048, 00:19:46.471 "data_size": 63488 00:19:46.471 }, 00:19:46.471 { 00:19:46.471 "name": null, 00:19:46.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.471 "is_configured": false, 00:19:46.471 "data_offset": 0, 00:19:46.471 "data_size": 63488 00:19:46.471 }, 00:19:46.471 { 00:19:46.471 "name": "BaseBdev3", 00:19:46.471 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:46.471 "is_configured": true, 00:19:46.471 "data_offset": 2048, 00:19:46.471 "data_size": 63488 00:19:46.471 }, 00:19:46.471 { 00:19:46.471 "name": "BaseBdev4", 00:19:46.471 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:46.471 "is_configured": true, 00:19:46.471 "data_offset": 2048, 00:19:46.471 "data_size": 63488 00:19:46.471 } 00:19:46.471 ] 00:19:46.471 }' 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.471 09:12:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.729 09:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.729 09:12:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.988 [2024-11-06 09:12:23.470489] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:46.988 [2024-11-06 09:12:23.470601] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:46.988 [2024-11-06 09:12:23.470763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.608 "name": "raid_bdev1", 00:19:47.608 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:47.608 "strip_size_kb": 0, 00:19:47.608 "state": "online", 00:19:47.608 "raid_level": "raid1", 00:19:47.608 "superblock": true, 00:19:47.608 "num_base_bdevs": 4, 00:19:47.608 "num_base_bdevs_discovered": 3, 00:19:47.608 "num_base_bdevs_operational": 3, 00:19:47.608 "base_bdevs_list": [ 00:19:47.608 { 00:19:47.608 "name": "spare", 00:19:47.608 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:47.608 "is_configured": true, 00:19:47.608 "data_offset": 2048, 00:19:47.608 "data_size": 63488 00:19:47.608 }, 00:19:47.608 { 00:19:47.608 "name": null, 
00:19:47.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.608 "is_configured": false, 00:19:47.608 "data_offset": 0, 00:19:47.608 "data_size": 63488 00:19:47.608 }, 00:19:47.608 { 00:19:47.608 "name": "BaseBdev3", 00:19:47.608 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:47.608 "is_configured": true, 00:19:47.608 "data_offset": 2048, 00:19:47.608 "data_size": 63488 00:19:47.608 }, 00:19:47.608 { 00:19:47.608 "name": "BaseBdev4", 00:19:47.608 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:47.608 "is_configured": true, 00:19:47.608 "data_offset": 2048, 00:19:47.608 "data_size": 63488 00:19:47.608 } 00:19:47.608 ] 00:19:47.608 }' 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:47.608 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.867 
09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.867 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.867 "name": "raid_bdev1", 00:19:47.867 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:47.867 "strip_size_kb": 0, 00:19:47.867 "state": "online", 00:19:47.867 "raid_level": "raid1", 00:19:47.867 "superblock": true, 00:19:47.867 "num_base_bdevs": 4, 00:19:47.867 "num_base_bdevs_discovered": 3, 00:19:47.867 "num_base_bdevs_operational": 3, 00:19:47.867 "base_bdevs_list": [ 00:19:47.867 { 00:19:47.867 "name": "spare", 00:19:47.867 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:47.867 "is_configured": true, 00:19:47.867 "data_offset": 2048, 00:19:47.867 "data_size": 63488 00:19:47.867 }, 00:19:47.867 { 00:19:47.867 "name": null, 00:19:47.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.868 "is_configured": false, 00:19:47.868 "data_offset": 0, 00:19:47.868 "data_size": 63488 00:19:47.868 }, 00:19:47.868 { 00:19:47.868 "name": "BaseBdev3", 00:19:47.868 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:47.868 "is_configured": true, 00:19:47.868 "data_offset": 2048, 00:19:47.868 "data_size": 63488 00:19:47.868 }, 00:19:47.868 { 00:19:47.868 "name": "BaseBdev4", 00:19:47.868 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:47.868 "is_configured": true, 00:19:47.868 "data_offset": 2048, 00:19:47.868 "data_size": 63488 00:19:47.868 } 00:19:47.868 ] 00:19:47.868 }' 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.868 09:12:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.868 "name": "raid_bdev1", 
00:19:47.868 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:47.868 "strip_size_kb": 0, 00:19:47.868 "state": "online", 00:19:47.868 "raid_level": "raid1", 00:19:47.868 "superblock": true, 00:19:47.868 "num_base_bdevs": 4, 00:19:47.868 "num_base_bdevs_discovered": 3, 00:19:47.868 "num_base_bdevs_operational": 3, 00:19:47.868 "base_bdevs_list": [ 00:19:47.868 { 00:19:47.868 "name": "spare", 00:19:47.868 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:47.868 "is_configured": true, 00:19:47.868 "data_offset": 2048, 00:19:47.868 "data_size": 63488 00:19:47.868 }, 00:19:47.868 { 00:19:47.868 "name": null, 00:19:47.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.868 "is_configured": false, 00:19:47.868 "data_offset": 0, 00:19:47.868 "data_size": 63488 00:19:47.868 }, 00:19:47.868 { 00:19:47.868 "name": "BaseBdev3", 00:19:47.868 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:47.868 "is_configured": true, 00:19:47.868 "data_offset": 2048, 00:19:47.868 "data_size": 63488 00:19:47.868 }, 00:19:47.868 { 00:19:47.868 "name": "BaseBdev4", 00:19:47.868 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:47.868 "is_configured": true, 00:19:47.868 "data_offset": 2048, 00:19:47.868 "data_size": 63488 00:19:47.868 } 00:19:47.868 ] 00:19:47.868 }' 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.868 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.478 [2024-11-06 09:12:24.843012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.478 [2024-11-06 09:12:24.843188] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:19:48.478 [2024-11-06 09:12:24.843308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.478 [2024-11-06 09:12:24.843415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.478 [2024-11-06 09:12:24.843433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.478 09:12:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:48.736 /dev/nbd0 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.736 1+0 records in 00:19:48.736 1+0 records out 00:19:48.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355418 s, 11.5 MB/s 00:19:48.736 09:12:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.736 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:49.303 /dev/nbd1 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # (( i <= 20 )) 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:49.303 1+0 records in 00:19:49.303 1+0 records out 00:19:49.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378667 s, 10.8 MB/s 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:49.303 09:12:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.303 09:12:25 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.562 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@45 -- # return 0 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.131 [2024-11-06 09:12:26.439670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:50.131 [2024-11-06 09:12:26.439905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:50.131 [2024-11-06 09:12:26.439954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:50.131 [2024-11-06 09:12:26.439972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:50.131 [2024-11-06 09:12:26.442944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:50.131 [2024-11-06 09:12:26.443116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:50.131 [2024-11-06 09:12:26.443251] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:50.131 [2024-11-06 09:12:26.443316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.131 [2024-11-06 09:12:26.443501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:19:50.131 [2024-11-06 09:12:26.443671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:50.131 spare 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.131 [2024-11-06 09:12:26.543871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:50.131 [2024-11-06 09:12:26.544089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:50.131 [2024-11-06 09:12:26.544509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:19:50.131 [2024-11-06 09:12:26.544776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:50.131 [2024-11-06 09:12:26.544800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:50.131 [2024-11-06 09:12:26.545042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.131 "name": "raid_bdev1", 00:19:50.131 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:50.131 "strip_size_kb": 0, 00:19:50.131 "state": "online", 00:19:50.131 "raid_level": "raid1", 00:19:50.131 "superblock": true, 00:19:50.131 "num_base_bdevs": 4, 00:19:50.131 "num_base_bdevs_discovered": 3, 00:19:50.131 "num_base_bdevs_operational": 3, 00:19:50.131 "base_bdevs_list": [ 00:19:50.131 { 00:19:50.131 "name": "spare", 00:19:50.131 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:50.131 "is_configured": true, 00:19:50.131 "data_offset": 2048, 00:19:50.131 "data_size": 63488 00:19:50.131 }, 00:19:50.131 { 00:19:50.131 "name": null, 00:19:50.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.131 "is_configured": false, 00:19:50.131 "data_offset": 2048, 
00:19:50.131 "data_size": 63488 00:19:50.131 }, 00:19:50.131 { 00:19:50.131 "name": "BaseBdev3", 00:19:50.131 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:50.131 "is_configured": true, 00:19:50.131 "data_offset": 2048, 00:19:50.131 "data_size": 63488 00:19:50.131 }, 00:19:50.131 { 00:19:50.131 "name": "BaseBdev4", 00:19:50.131 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:50.131 "is_configured": true, 00:19:50.131 "data_offset": 2048, 00:19:50.131 "data_size": 63488 00:19:50.131 } 00:19:50.131 ] 00:19:50.131 }' 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.131 09:12:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.698 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.698 "name": "raid_bdev1", 00:19:50.699 "uuid": 
"02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:50.699 "strip_size_kb": 0, 00:19:50.699 "state": "online", 00:19:50.699 "raid_level": "raid1", 00:19:50.699 "superblock": true, 00:19:50.699 "num_base_bdevs": 4, 00:19:50.699 "num_base_bdevs_discovered": 3, 00:19:50.699 "num_base_bdevs_operational": 3, 00:19:50.699 "base_bdevs_list": [ 00:19:50.699 { 00:19:50.699 "name": "spare", 00:19:50.699 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:50.699 "is_configured": true, 00:19:50.699 "data_offset": 2048, 00:19:50.699 "data_size": 63488 00:19:50.699 }, 00:19:50.699 { 00:19:50.699 "name": null, 00:19:50.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.699 "is_configured": false, 00:19:50.699 "data_offset": 2048, 00:19:50.699 "data_size": 63488 00:19:50.699 }, 00:19:50.699 { 00:19:50.699 "name": "BaseBdev3", 00:19:50.699 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:50.699 "is_configured": true, 00:19:50.699 "data_offset": 2048, 00:19:50.699 "data_size": 63488 00:19:50.699 }, 00:19:50.699 { 00:19:50.699 "name": "BaseBdev4", 00:19:50.699 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:50.699 "is_configured": true, 00:19:50.699 "data_offset": 2048, 00:19:50.699 "data_size": 63488 00:19:50.699 } 00:19:50.699 ] 00:19:50.699 }' 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.699 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.957 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.958 [2024-11-06 09:12:27.236097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.958 "name": "raid_bdev1", 00:19:50.958 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:50.958 "strip_size_kb": 0, 00:19:50.958 "state": "online", 00:19:50.958 "raid_level": "raid1", 00:19:50.958 "superblock": true, 00:19:50.958 "num_base_bdevs": 4, 00:19:50.958 "num_base_bdevs_discovered": 2, 00:19:50.958 "num_base_bdevs_operational": 2, 00:19:50.958 "base_bdevs_list": [ 00:19:50.958 { 00:19:50.958 "name": null, 00:19:50.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.958 "is_configured": false, 00:19:50.958 "data_offset": 0, 00:19:50.958 "data_size": 63488 00:19:50.958 }, 00:19:50.958 { 00:19:50.958 "name": null, 00:19:50.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.958 "is_configured": false, 00:19:50.958 "data_offset": 2048, 00:19:50.958 "data_size": 63488 00:19:50.958 }, 00:19:50.958 { 00:19:50.958 "name": "BaseBdev3", 00:19:50.958 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:50.958 "is_configured": true, 00:19:50.958 "data_offset": 2048, 00:19:50.958 "data_size": 63488 00:19:50.958 }, 00:19:50.958 { 00:19:50.958 "name": "BaseBdev4", 00:19:50.958 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:50.958 "is_configured": true, 00:19:50.958 "data_offset": 2048, 00:19:50.958 "data_size": 63488 00:19:50.958 } 00:19:50.958 ] 00:19:50.958 }' 00:19:50.958 09:12:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.958 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.525 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.526 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.526 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.526 [2024-11-06 09:12:27.800263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.526 [2024-11-06 09:12:27.800635] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:51.526 [2024-11-06 09:12:27.800669] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:51.526 [2024-11-06 09:12:27.800723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.526 [2024-11-06 09:12:27.813844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:19:51.526 09:12:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.526 09:12:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:51.526 [2024-11-06 09:12:27.816332] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.513 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.514 "name": "raid_bdev1", 00:19:52.514 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:52.514 "strip_size_kb": 0, 00:19:52.514 "state": "online", 00:19:52.514 "raid_level": "raid1", 00:19:52.514 "superblock": true, 00:19:52.514 "num_base_bdevs": 4, 00:19:52.514 "num_base_bdevs_discovered": 3, 00:19:52.514 "num_base_bdevs_operational": 3, 00:19:52.514 "process": { 00:19:52.514 "type": "rebuild", 00:19:52.514 "target": "spare", 00:19:52.514 "progress": { 00:19:52.514 "blocks": 20480, 00:19:52.514 "percent": 32 00:19:52.514 } 00:19:52.514 }, 00:19:52.514 "base_bdevs_list": [ 00:19:52.514 { 00:19:52.514 "name": "spare", 00:19:52.514 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:52.514 "is_configured": true, 00:19:52.514 "data_offset": 2048, 00:19:52.514 "data_size": 63488 00:19:52.514 }, 00:19:52.514 { 00:19:52.514 "name": null, 00:19:52.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.514 "is_configured": false, 00:19:52.514 "data_offset": 2048, 00:19:52.514 "data_size": 63488 00:19:52.514 }, 00:19:52.514 { 00:19:52.514 "name": "BaseBdev3", 00:19:52.514 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:52.514 "is_configured": true, 00:19:52.514 "data_offset": 2048, 00:19:52.514 "data_size": 
63488 00:19:52.514 }, 00:19:52.514 { 00:19:52.514 "name": "BaseBdev4", 00:19:52.514 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:52.514 "is_configured": true, 00:19:52.514 "data_offset": 2048, 00:19:52.514 "data_size": 63488 00:19:52.514 } 00:19:52.514 ] 00:19:52.514 }' 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.514 09:12:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.514 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.514 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.514 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.514 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.514 [2024-11-06 09:12:29.025535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.772 [2024-11-06 09:12:29.126578] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.772 [2024-11-06 09:12:29.126984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.772 [2024-11-06 09:12:29.127024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.772 [2024-11-06 09:12:29.127038] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.772 "name": "raid_bdev1", 00:19:52.772 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:52.772 "strip_size_kb": 0, 00:19:52.772 "state": "online", 00:19:52.772 "raid_level": "raid1", 00:19:52.772 "superblock": true, 00:19:52.772 "num_base_bdevs": 4, 00:19:52.772 "num_base_bdevs_discovered": 2, 00:19:52.772 "num_base_bdevs_operational": 2, 00:19:52.772 "base_bdevs_list": [ 00:19:52.772 { 00:19:52.772 "name": null, 
00:19:52.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.772 "is_configured": false, 00:19:52.772 "data_offset": 0, 00:19:52.772 "data_size": 63488 00:19:52.772 }, 00:19:52.772 { 00:19:52.772 "name": null, 00:19:52.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.772 "is_configured": false, 00:19:52.772 "data_offset": 2048, 00:19:52.772 "data_size": 63488 00:19:52.772 }, 00:19:52.772 { 00:19:52.772 "name": "BaseBdev3", 00:19:52.772 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:52.772 "is_configured": true, 00:19:52.772 "data_offset": 2048, 00:19:52.772 "data_size": 63488 00:19:52.772 }, 00:19:52.772 { 00:19:52.772 "name": "BaseBdev4", 00:19:52.772 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:52.772 "is_configured": true, 00:19:52.772 "data_offset": 2048, 00:19:52.772 "data_size": 63488 00:19:52.772 } 00:19:52.772 ] 00:19:52.772 }' 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.772 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.336 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:53.336 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.336 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.336 [2024-11-06 09:12:29.675294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:53.336 [2024-11-06 09:12:29.675532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.336 [2024-11-06 09:12:29.675587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:53.336 [2024-11-06 09:12:29.675604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.336 [2024-11-06 09:12:29.676228] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:19:53.336 [2024-11-06 09:12:29.676272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:53.336 [2024-11-06 09:12:29.676402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:53.336 [2024-11-06 09:12:29.676423] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:19:53.336 [2024-11-06 09:12:29.676443] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:53.336 [2024-11-06 09:12:29.676484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.336 [2024-11-06 09:12:29.689848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:19:53.336 spare 00:19:53.336 09:12:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.336 09:12:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:53.336 [2024-11-06 09:12:29.692736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.271 "name": "raid_bdev1", 00:19:54.271 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:54.271 "strip_size_kb": 0, 00:19:54.271 "state": "online", 00:19:54.271 "raid_level": "raid1", 00:19:54.271 "superblock": true, 00:19:54.271 "num_base_bdevs": 4, 00:19:54.271 "num_base_bdevs_discovered": 3, 00:19:54.271 "num_base_bdevs_operational": 3, 00:19:54.271 "process": { 00:19:54.271 "type": "rebuild", 00:19:54.271 "target": "spare", 00:19:54.271 "progress": { 00:19:54.271 "blocks": 20480, 00:19:54.271 "percent": 32 00:19:54.271 } 00:19:54.271 }, 00:19:54.271 "base_bdevs_list": [ 00:19:54.271 { 00:19:54.271 "name": "spare", 00:19:54.271 "uuid": "e0ea59f5-4e25-5f38-88cc-e1371032a22c", 00:19:54.271 "is_configured": true, 00:19:54.271 "data_offset": 2048, 00:19:54.271 "data_size": 63488 00:19:54.271 }, 00:19:54.271 { 00:19:54.271 "name": null, 00:19:54.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.271 "is_configured": false, 00:19:54.271 "data_offset": 2048, 00:19:54.271 "data_size": 63488 00:19:54.271 }, 00:19:54.271 { 00:19:54.271 "name": "BaseBdev3", 00:19:54.271 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:54.271 "is_configured": true, 00:19:54.271 "data_offset": 2048, 00:19:54.271 "data_size": 63488 00:19:54.271 }, 00:19:54.271 { 00:19:54.271 "name": "BaseBdev4", 00:19:54.271 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:54.271 "is_configured": true, 00:19:54.271 "data_offset": 2048, 00:19:54.271 "data_size": 63488 00:19:54.271 } 00:19:54.271 ] 00:19:54.271 }' 00:19:54.271 09:12:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.271 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.529 [2024-11-06 09:12:30.865794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.529 [2024-11-06 09:12:30.901573] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:54.529 [2024-11-06 09:12:30.901977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.529 [2024-11-06 09:12:30.902010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:54.529 [2024-11-06 09:12:30.902027] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:54.529 09:12:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:54.529 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:54.530 "name": "raid_bdev1", 00:19:54.530 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:54.530 "strip_size_kb": 0, 00:19:54.530 "state": "online", 00:19:54.530 "raid_level": "raid1", 00:19:54.530 "superblock": true, 00:19:54.530 "num_base_bdevs": 4, 00:19:54.530 "num_base_bdevs_discovered": 2, 00:19:54.530 "num_base_bdevs_operational": 2, 00:19:54.530 "base_bdevs_list": [ 00:19:54.530 { 00:19:54.530 "name": null, 00:19:54.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.530 "is_configured": false, 00:19:54.530 "data_offset": 0, 00:19:54.530 "data_size": 63488 00:19:54.530 }, 00:19:54.530 { 00:19:54.530 "name": null, 00:19:54.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.530 
"is_configured": false, 00:19:54.530 "data_offset": 2048, 00:19:54.530 "data_size": 63488 00:19:54.530 }, 00:19:54.530 { 00:19:54.530 "name": "BaseBdev3", 00:19:54.530 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:54.530 "is_configured": true, 00:19:54.530 "data_offset": 2048, 00:19:54.530 "data_size": 63488 00:19:54.530 }, 00:19:54.530 { 00:19:54.530 "name": "BaseBdev4", 00:19:54.530 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:54.530 "is_configured": true, 00:19:54.530 "data_offset": 2048, 00:19:54.530 "data_size": 63488 00:19:54.530 } 00:19:54.530 ] 00:19:54.530 }' 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:54.530 09:12:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.095 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:55.095 "name": "raid_bdev1", 00:19:55.095 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:55.095 "strip_size_kb": 0, 00:19:55.095 "state": "online", 00:19:55.095 "raid_level": "raid1", 00:19:55.095 "superblock": true, 00:19:55.095 "num_base_bdevs": 4, 00:19:55.095 "num_base_bdevs_discovered": 2, 00:19:55.095 "num_base_bdevs_operational": 2, 00:19:55.095 "base_bdevs_list": [ 00:19:55.095 { 00:19:55.095 "name": null, 00:19:55.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.096 "is_configured": false, 00:19:55.096 "data_offset": 0, 00:19:55.096 "data_size": 63488 00:19:55.096 }, 00:19:55.096 { 00:19:55.096 "name": null, 00:19:55.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.096 "is_configured": false, 00:19:55.096 "data_offset": 2048, 00:19:55.096 "data_size": 63488 00:19:55.096 }, 00:19:55.096 { 00:19:55.096 "name": "BaseBdev3", 00:19:55.096 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:55.096 "is_configured": true, 00:19:55.096 "data_offset": 2048, 00:19:55.096 "data_size": 63488 00:19:55.096 }, 00:19:55.096 { 00:19:55.096 "name": "BaseBdev4", 00:19:55.096 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:55.096 "is_configured": true, 00:19:55.096 "data_offset": 2048, 00:19:55.096 "data_size": 63488 00:19:55.096 } 00:19:55.096 ] 00:19:55.096 }' 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.096 [2024-11-06 09:12:31.597792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:55.096 [2024-11-06 09:12:31.598026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:55.096 [2024-11-06 09:12:31.598101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:55.096 [2024-11-06 09:12:31.598229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:55.096 [2024-11-06 09:12:31.598889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:55.096 [2024-11-06 09:12:31.599046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:55.096 [2024-11-06 09:12:31.599164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:55.096 [2024-11-06 09:12:31.599191] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:55.096 [2024-11-06 09:12:31.599214] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:55.096 [2024-11-06 09:12:31.599233] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:55.096 BaseBdev1 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:19:55.096 09:12:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.468 "name": "raid_bdev1", 00:19:56.468 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:56.468 "strip_size_kb": 0, 
00:19:56.468 "state": "online", 00:19:56.468 "raid_level": "raid1", 00:19:56.468 "superblock": true, 00:19:56.468 "num_base_bdevs": 4, 00:19:56.468 "num_base_bdevs_discovered": 2, 00:19:56.468 "num_base_bdevs_operational": 2, 00:19:56.468 "base_bdevs_list": [ 00:19:56.468 { 00:19:56.468 "name": null, 00:19:56.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.468 "is_configured": false, 00:19:56.468 "data_offset": 0, 00:19:56.468 "data_size": 63488 00:19:56.468 }, 00:19:56.468 { 00:19:56.468 "name": null, 00:19:56.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.468 "is_configured": false, 00:19:56.468 "data_offset": 2048, 00:19:56.468 "data_size": 63488 00:19:56.468 }, 00:19:56.468 { 00:19:56.468 "name": "BaseBdev3", 00:19:56.468 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:56.468 "is_configured": true, 00:19:56.468 "data_offset": 2048, 00:19:56.468 "data_size": 63488 00:19:56.468 }, 00:19:56.468 { 00:19:56.468 "name": "BaseBdev4", 00:19:56.468 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:56.468 "is_configured": true, 00:19:56.468 "data_offset": 2048, 00:19:56.468 "data_size": 63488 00:19:56.468 } 00:19:56.468 ] 00:19:56.468 }' 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.468 09:12:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.727 "name": "raid_bdev1", 00:19:56.727 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:56.727 "strip_size_kb": 0, 00:19:56.727 "state": "online", 00:19:56.727 "raid_level": "raid1", 00:19:56.727 "superblock": true, 00:19:56.727 "num_base_bdevs": 4, 00:19:56.727 "num_base_bdevs_discovered": 2, 00:19:56.727 "num_base_bdevs_operational": 2, 00:19:56.727 "base_bdevs_list": [ 00:19:56.727 { 00:19:56.727 "name": null, 00:19:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.727 "is_configured": false, 00:19:56.727 "data_offset": 0, 00:19:56.727 "data_size": 63488 00:19:56.727 }, 00:19:56.727 { 00:19:56.727 "name": null, 00:19:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.727 "is_configured": false, 00:19:56.727 "data_offset": 2048, 00:19:56.727 "data_size": 63488 00:19:56.727 }, 00:19:56.727 { 00:19:56.727 "name": "BaseBdev3", 00:19:56.727 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:56.727 "is_configured": true, 00:19:56.727 "data_offset": 2048, 00:19:56.727 "data_size": 63488 00:19:56.727 }, 00:19:56.727 { 00:19:56.727 "name": "BaseBdev4", 00:19:56.727 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:56.727 "is_configured": true, 00:19:56.727 "data_offset": 2048, 00:19:56.727 "data_size": 63488 00:19:56.727 } 00:19:56.727 ] 00:19:56.727 }' 00:19:56.727 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.986 [2024-11-06 09:12:33.350453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.986 [2024-11-06 09:12:33.350885] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:19:56.986 [2024-11-06 09:12:33.350914] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this 
bdev's uuid 00:19:56.986 request: 00:19:56.986 { 00:19:56.986 "base_bdev": "BaseBdev1", 00:19:56.986 "raid_bdev": "raid_bdev1", 00:19:56.986 "method": "bdev_raid_add_base_bdev", 00:19:56.986 "req_id": 1 00:19:56.986 } 00:19:56.986 Got JSON-RPC error response 00:19:56.986 response: 00:19:56.986 { 00:19:56.986 "code": -22, 00:19:56.986 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:56.986 } 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:56.986 09:12:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- 
# local num_base_bdevs_discovered 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.975 "name": "raid_bdev1", 00:19:57.975 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:57.975 "strip_size_kb": 0, 00:19:57.975 "state": "online", 00:19:57.975 "raid_level": "raid1", 00:19:57.975 "superblock": true, 00:19:57.975 "num_base_bdevs": 4, 00:19:57.975 "num_base_bdevs_discovered": 2, 00:19:57.975 "num_base_bdevs_operational": 2, 00:19:57.975 "base_bdevs_list": [ 00:19:57.975 { 00:19:57.975 "name": null, 00:19:57.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.975 "is_configured": false, 00:19:57.975 "data_offset": 0, 00:19:57.975 "data_size": 63488 00:19:57.975 }, 00:19:57.975 { 00:19:57.975 "name": null, 00:19:57.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.975 "is_configured": false, 00:19:57.975 "data_offset": 2048, 00:19:57.975 "data_size": 63488 00:19:57.975 }, 00:19:57.975 { 00:19:57.975 "name": "BaseBdev3", 00:19:57.975 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:57.975 "is_configured": true, 00:19:57.975 "data_offset": 2048, 00:19:57.975 "data_size": 63488 00:19:57.975 }, 00:19:57.975 { 00:19:57.975 "name": "BaseBdev4", 00:19:57.975 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:57.975 "is_configured": true, 00:19:57.975 
"data_offset": 2048, 00:19:57.975 "data_size": 63488 00:19:57.975 } 00:19:57.975 ] 00:19:57.975 }' 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.975 09:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.542 "name": "raid_bdev1", 00:19:58.542 "uuid": "02fbb8ae-4676-402c-93a1-0468f235b1c6", 00:19:58.542 "strip_size_kb": 0, 00:19:58.542 "state": "online", 00:19:58.542 "raid_level": "raid1", 00:19:58.542 "superblock": true, 00:19:58.542 "num_base_bdevs": 4, 00:19:58.542 "num_base_bdevs_discovered": 2, 00:19:58.542 "num_base_bdevs_operational": 2, 00:19:58.542 "base_bdevs_list": [ 00:19:58.542 { 00:19:58.542 "name": null, 00:19:58.542 "uuid": "00000000-0000-0000-0000-000000000000", 
00:19:58.542 "is_configured": false, 00:19:58.542 "data_offset": 0, 00:19:58.542 "data_size": 63488 00:19:58.542 }, 00:19:58.542 { 00:19:58.542 "name": null, 00:19:58.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.542 "is_configured": false, 00:19:58.542 "data_offset": 2048, 00:19:58.542 "data_size": 63488 00:19:58.542 }, 00:19:58.542 { 00:19:58.542 "name": "BaseBdev3", 00:19:58.542 "uuid": "9781104c-4b55-5379-96b3-42a08f002509", 00:19:58.542 "is_configured": true, 00:19:58.542 "data_offset": 2048, 00:19:58.542 "data_size": 63488 00:19:58.542 }, 00:19:58.542 { 00:19:58.542 "name": "BaseBdev4", 00:19:58.542 "uuid": "2b30be22-5756-5da6-a393-a09c0857b264", 00:19:58.542 "is_configured": true, 00:19:58.542 "data_offset": 2048, 00:19:58.542 "data_size": 63488 00:19:58.542 } 00:19:58.542 ] 00:19:58.542 }' 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.542 09:12:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78186 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78186 ']' 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 78186 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78186 00:19:58.542 killing process with pid 78186 00:19:58.542 Received shutdown signal, 
test time was about 60.000000 seconds 00:19:58.542 00:19:58.542 Latency(us) 00:19:58.542 [2024-11-06T09:12:35.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.542 [2024-11-06T09:12:35.077Z] =================================================================================================================== 00:19:58.542 [2024-11-06T09:12:35.077Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78186' 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 78186 00:19:58.542 [2024-11-06 09:12:35.055355] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.542 09:12:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 78186 00:19:58.542 [2024-11-06 09:12:35.055497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:58.542 [2024-11-06 09:12:35.055588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:58.542 [2024-11-06 09:12:35.055605] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:59.108 [2024-11-06 09:12:35.490080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.042 09:12:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:00.042 00:20:00.042 real 0m29.502s 00:20:00.042 user 0m35.994s 00:20:00.042 sys 0m4.267s 00:20:00.042 ************************************ 00:20:00.042 END TEST raid_rebuild_test_sb 00:20:00.042 ************************************ 00:20:00.042 09:12:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:00.042 09:12:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 09:12:36 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:20:00.042 09:12:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:00.042 09:12:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:00.042 09:12:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.301 ************************************ 00:20:00.301 START TEST raid_rebuild_test_io 00:20:00.301 ************************************ 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:00.301 09:12:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:00.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78980 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78980 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 78980 ']' 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:00.301 09:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:00.301 [2024-11-06 09:12:36.694597] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:20:00.301 [2024-11-06 09:12:36.695026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78980 ] 00:20:00.301 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:00.301 Zero copy mechanism will not be used. 00:20:00.560 [2024-11-06 09:12:36.881967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.560 [2024-11-06 09:12:37.047831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.818 [2024-11-06 09:12:37.251241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.818 [2024-11-06 09:12:37.251463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 BaseBdev1_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 [2024-11-06 09:12:37.737886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:01.386 [2024-11-06 09:12:37.737969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.386 [2024-11-06 09:12:37.738003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:01.386 [2024-11-06 09:12:37.738023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.386 [2024-11-06 09:12:37.740757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.386 [2024-11-06 09:12:37.740811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:01.386 BaseBdev1 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 BaseBdev2_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 [2024-11-06 09:12:37.789763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:01.386 [2024-11-06 09:12:37.789990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.386 [2024-11-06 09:12:37.790067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:01.386 [2024-11-06 09:12:37.790285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.386 [2024-11-06 09:12:37.793118] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.386 BaseBdev2 00:20:01.386 [2024-11-06 09:12:37.793291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 BaseBdev3_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 [2024-11-06 09:12:37.852202] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:01.386 [2024-11-06 09:12:37.852403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.386 [2024-11-06 09:12:37.852480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:01.386 [2024-11-06 09:12:37.852507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.386 [2024-11-06 09:12:37.855248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.386 [2024-11-06 09:12:37.855302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:01.386 BaseBdev3 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 BaseBdev4_malloc 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.386 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.386 [2024-11-06 09:12:37.899958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:01.386 [2024-11-06 09:12:37.900155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:20:01.387 [2024-11-06 09:12:37.900227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:01.387 [2024-11-06 09:12:37.900354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.387 [2024-11-06 09:12:37.903079] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.387 BaseBdev4 00:20:01.387 [2024-11-06 09:12:37.903246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:01.387 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.387 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:01.387 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.387 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.646 spare_malloc 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.646 spare_delay 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.646 [2024-11-06 09:12:37.955801] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:01.646 [2024-11-06 09:12:37.955889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.646 [2024-11-06 09:12:37.955920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:01.646 [2024-11-06 09:12:37.955938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.646 [2024-11-06 09:12:37.958652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.646 spare 00:20:01.646 [2024-11-06 09:12:37.958846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:01.646 [2024-11-06 09:12:37.963892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.646 [2024-11-06 09:12:37.966271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:01.646 [2024-11-06 09:12:37.966496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:01.646 [2024-11-06 09:12:37.966605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:01.646 [2024-11-06 09:12:37.966724] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:01.646 [2024-11-06 09:12:37.966748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:01.646 [2024-11-06 09:12:37.967093] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:01.646 [2024-11-06 09:12:37.967316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:01.646 [2024-11-06 09:12:37.967337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:01.646 [2024-11-06 09:12:37.967527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.646 09:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.646 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.646 "name": "raid_bdev1", 00:20:01.646 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:01.646 "strip_size_kb": 0, 00:20:01.646 "state": "online", 00:20:01.646 "raid_level": "raid1", 00:20:01.646 "superblock": false, 00:20:01.646 "num_base_bdevs": 4, 00:20:01.646 "num_base_bdevs_discovered": 4, 00:20:01.646 "num_base_bdevs_operational": 4, 00:20:01.646 "base_bdevs_list": [ 00:20:01.646 { 00:20:01.646 "name": "BaseBdev1", 00:20:01.647 "uuid": "e8bf6242-9131-53d9-aa5c-c8a2d20a79ee", 00:20:01.647 "is_configured": true, 00:20:01.647 "data_offset": 0, 00:20:01.647 "data_size": 65536 00:20:01.647 }, 00:20:01.647 { 00:20:01.647 "name": "BaseBdev2", 00:20:01.647 "uuid": "f1373107-63aa-5c78-b941-961cdd2f0b94", 00:20:01.647 "is_configured": true, 00:20:01.647 "data_offset": 0, 00:20:01.647 "data_size": 65536 00:20:01.647 }, 00:20:01.647 { 00:20:01.647 "name": "BaseBdev3", 00:20:01.647 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:01.647 "is_configured": true, 00:20:01.647 "data_offset": 0, 00:20:01.647 "data_size": 65536 00:20:01.647 }, 00:20:01.647 { 00:20:01.647 "name": "BaseBdev4", 00:20:01.647 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:01.647 "is_configured": true, 00:20:01.647 "data_offset": 0, 00:20:01.647 "data_size": 65536 00:20:01.647 } 00:20:01.647 ] 00:20:01.647 }' 00:20:01.647 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:01.647 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:02.214 09:12:38 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.214 [2024-11-06 09:12:38.516417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.214 [2024-11-06 09:12:38.623982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:02.214 09:12:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.214 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.215 "name": "raid_bdev1", 00:20:02.215 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:02.215 "strip_size_kb": 0, 00:20:02.215 "state": "online", 
00:20:02.215 "raid_level": "raid1", 00:20:02.215 "superblock": false, 00:20:02.215 "num_base_bdevs": 4, 00:20:02.215 "num_base_bdevs_discovered": 3, 00:20:02.215 "num_base_bdevs_operational": 3, 00:20:02.215 "base_bdevs_list": [ 00:20:02.215 { 00:20:02.215 "name": null, 00:20:02.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.215 "is_configured": false, 00:20:02.215 "data_offset": 0, 00:20:02.215 "data_size": 65536 00:20:02.215 }, 00:20:02.215 { 00:20:02.215 "name": "BaseBdev2", 00:20:02.215 "uuid": "f1373107-63aa-5c78-b941-961cdd2f0b94", 00:20:02.215 "is_configured": true, 00:20:02.215 "data_offset": 0, 00:20:02.215 "data_size": 65536 00:20:02.215 }, 00:20:02.215 { 00:20:02.215 "name": "BaseBdev3", 00:20:02.215 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:02.215 "is_configured": true, 00:20:02.215 "data_offset": 0, 00:20:02.215 "data_size": 65536 00:20:02.215 }, 00:20:02.215 { 00:20:02.215 "name": "BaseBdev4", 00:20:02.215 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:02.215 "is_configured": true, 00:20:02.215 "data_offset": 0, 00:20:02.215 "data_size": 65536 00:20:02.215 } 00:20:02.215 ] 00:20:02.215 }' 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.215 09:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.474 [2024-11-06 09:12:38.752154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:02.474 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:02.474 Zero copy mechanism will not be used. 00:20:02.474 Running I/O for 60 seconds... 
00:20:02.733 09:12:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:02.733 09:12:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.733 09:12:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:02.733 [2024-11-06 09:12:39.135142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:02.733 09:12:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.733 09:12:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:02.733 [2024-11-06 09:12:39.192301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:02.733 [2024-11-06 09:12:39.195071] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:02.991 [2024-11-06 09:12:39.305123] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:02.991 [2024-11-06 09:12:39.305984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:03.250 [2024-11-06 09:12:39.533491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:03.508 163.00 IOPS, 489.00 MiB/s [2024-11-06T09:12:40.043Z] [2024-11-06 09:12:39.803328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:03.767 [2024-11-06 09:12:40.046695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.767 "name": "raid_bdev1", 00:20:03.767 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:03.767 "strip_size_kb": 0, 00:20:03.767 "state": "online", 00:20:03.767 "raid_level": "raid1", 00:20:03.767 "superblock": false, 00:20:03.767 "num_base_bdevs": 4, 00:20:03.767 "num_base_bdevs_discovered": 4, 00:20:03.767 "num_base_bdevs_operational": 4, 00:20:03.767 "process": { 00:20:03.767 "type": "rebuild", 00:20:03.767 "target": "spare", 00:20:03.767 "progress": { 00:20:03.767 "blocks": 10240, 00:20:03.767 "percent": 15 00:20:03.767 } 00:20:03.767 }, 00:20:03.767 "base_bdevs_list": [ 00:20:03.767 { 00:20:03.767 "name": "spare", 00:20:03.767 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:03.767 "is_configured": true, 00:20:03.767 "data_offset": 0, 00:20:03.767 "data_size": 65536 00:20:03.767 }, 00:20:03.767 { 00:20:03.767 "name": "BaseBdev2", 00:20:03.767 "uuid": "f1373107-63aa-5c78-b941-961cdd2f0b94", 00:20:03.767 "is_configured": true, 00:20:03.767 "data_offset": 0, 00:20:03.767 
"data_size": 65536 00:20:03.767 }, 00:20:03.767 { 00:20:03.767 "name": "BaseBdev3", 00:20:03.767 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:03.767 "is_configured": true, 00:20:03.767 "data_offset": 0, 00:20:03.767 "data_size": 65536 00:20:03.767 }, 00:20:03.767 { 00:20:03.767 "name": "BaseBdev4", 00:20:03.767 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:03.767 "is_configured": true, 00:20:03.767 "data_offset": 0, 00:20:03.767 "data_size": 65536 00:20:03.767 } 00:20:03.767 ] 00:20:03.767 }' 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:03.767 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.026 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:04.026 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:04.026 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.026 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.026 [2024-11-06 09:12:40.348521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.026 [2024-11-06 09:12:40.407530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:04.026 [2024-11-06 09:12:40.409361] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:04.026 [2024-11-06 09:12:40.519946] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:04.026 [2024-11-06 09:12:40.524096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:04.026 [2024-11-06 09:12:40.524274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:04.026 [2024-11-06 09:12:40.524330] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:04.026 [2024-11-06 09:12:40.548935] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.285 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.285 "name": "raid_bdev1", 00:20:04.285 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:04.285 "strip_size_kb": 0, 00:20:04.285 "state": "online", 00:20:04.285 "raid_level": "raid1", 00:20:04.285 "superblock": false, 00:20:04.285 "num_base_bdevs": 4, 00:20:04.285 "num_base_bdevs_discovered": 3, 00:20:04.285 "num_base_bdevs_operational": 3, 00:20:04.285 "base_bdevs_list": [ 00:20:04.285 { 00:20:04.285 "name": null, 00:20:04.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.285 "is_configured": false, 00:20:04.285 "data_offset": 0, 00:20:04.285 "data_size": 65536 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "name": "BaseBdev2", 00:20:04.285 "uuid": "f1373107-63aa-5c78-b941-961cdd2f0b94", 00:20:04.285 "is_configured": true, 00:20:04.285 "data_offset": 0, 00:20:04.285 "data_size": 65536 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "name": "BaseBdev3", 00:20:04.285 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:04.285 "is_configured": true, 00:20:04.285 "data_offset": 0, 00:20:04.285 "data_size": 65536 00:20:04.285 }, 00:20:04.285 { 00:20:04.285 "name": "BaseBdev4", 00:20:04.285 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:04.285 "is_configured": true, 00:20:04.285 "data_offset": 0, 00:20:04.286 "data_size": 65536 00:20:04.286 } 00:20:04.286 ] 00:20:04.286 }' 00:20:04.286 09:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.286 09:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.544 134.00 IOPS, 402.00 MiB/s [2024-11-06T09:12:41.079Z] 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.544 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.803 09:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.803 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.803 "name": "raid_bdev1", 00:20:04.804 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:04.804 "strip_size_kb": 0, 00:20:04.804 "state": "online", 00:20:04.804 "raid_level": "raid1", 00:20:04.804 "superblock": false, 00:20:04.804 "num_base_bdevs": 4, 00:20:04.804 "num_base_bdevs_discovered": 3, 00:20:04.804 "num_base_bdevs_operational": 3, 00:20:04.804 "base_bdevs_list": [ 00:20:04.804 { 00:20:04.804 "name": null, 00:20:04.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.804 "is_configured": false, 00:20:04.804 "data_offset": 0, 00:20:04.804 "data_size": 65536 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "name": "BaseBdev2", 00:20:04.804 "uuid": "f1373107-63aa-5c78-b941-961cdd2f0b94", 00:20:04.804 "is_configured": true, 00:20:04.804 "data_offset": 0, 00:20:04.804 "data_size": 65536 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "name": "BaseBdev3", 00:20:04.804 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:04.804 "is_configured": 
true, 00:20:04.804 "data_offset": 0, 00:20:04.804 "data_size": 65536 00:20:04.804 }, 00:20:04.804 { 00:20:04.804 "name": "BaseBdev4", 00:20:04.804 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:04.804 "is_configured": true, 00:20:04.804 "data_offset": 0, 00:20:04.804 "data_size": 65536 00:20:04.804 } 00:20:04.804 ] 00:20:04.804 }' 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:04.804 [2024-11-06 09:12:41.222893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.804 09:12:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:04.804 [2024-11-06 09:12:41.298497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:04.804 [2024-11-06 09:12:41.301274] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:05.063 [2024-11-06 09:12:41.448421] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:05.063 [2024-11-06 09:12:41.449111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:20:05.351 [2024-11-06 09:12:41.653478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:05.351 [2024-11-06 09:12:41.654541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:05.659 140.33 IOPS, 421.00 MiB/s [2024-11-06T09:12:42.194Z] [2024-11-06 09:12:42.139448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:05.919 "name": "raid_bdev1", 00:20:05.919 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:05.919 "strip_size_kb": 0, 00:20:05.919 "state": "online", 00:20:05.919 "raid_level": "raid1", 00:20:05.919 "superblock": false, 
00:20:05.919 "num_base_bdevs": 4, 00:20:05.919 "num_base_bdevs_discovered": 4, 00:20:05.919 "num_base_bdevs_operational": 4, 00:20:05.919 "process": { 00:20:05.919 "type": "rebuild", 00:20:05.919 "target": "spare", 00:20:05.919 "progress": { 00:20:05.919 "blocks": 10240, 00:20:05.919 "percent": 15 00:20:05.919 } 00:20:05.919 }, 00:20:05.919 "base_bdevs_list": [ 00:20:05.919 { 00:20:05.919 "name": "spare", 00:20:05.919 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:05.919 "is_configured": true, 00:20:05.919 "data_offset": 0, 00:20:05.919 "data_size": 65536 00:20:05.919 }, 00:20:05.919 { 00:20:05.919 "name": "BaseBdev2", 00:20:05.919 "uuid": "f1373107-63aa-5c78-b941-961cdd2f0b94", 00:20:05.919 "is_configured": true, 00:20:05.919 "data_offset": 0, 00:20:05.919 "data_size": 65536 00:20:05.919 }, 00:20:05.919 { 00:20:05.919 "name": "BaseBdev3", 00:20:05.919 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:05.919 "is_configured": true, 00:20:05.919 "data_offset": 0, 00:20:05.919 "data_size": 65536 00:20:05.919 }, 00:20:05.919 { 00:20:05.919 "name": "BaseBdev4", 00:20:05.919 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:05.919 "is_configured": true, 00:20:05.919 "data_offset": 0, 00:20:05.919 "data_size": 65536 00:20:05.919 } 00:20:05.919 ] 00:20:05.919 }' 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:05.919 [2024-11-06 09:12:42.421701] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.919 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:05.919 [2024-11-06 09:12:42.433352] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:06.183 [2024-11-06 09:12:42.524570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:06.183 [2024-11-06 09:12:42.525624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:06.183 [2024-11-06 09:12:42.635788] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:20:06.183 [2024-11-06 09:12:42.636024] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.183 "name": "raid_bdev1", 00:20:06.183 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:06.183 "strip_size_kb": 0, 00:20:06.183 "state": "online", 00:20:06.183 "raid_level": "raid1", 00:20:06.183 "superblock": false, 00:20:06.183 "num_base_bdevs": 4, 00:20:06.183 "num_base_bdevs_discovered": 3, 00:20:06.183 "num_base_bdevs_operational": 3, 00:20:06.183 "process": { 00:20:06.183 "type": "rebuild", 00:20:06.183 "target": "spare", 00:20:06.183 "progress": { 00:20:06.183 "blocks": 16384, 00:20:06.183 "percent": 25 00:20:06.183 } 00:20:06.183 }, 00:20:06.183 "base_bdevs_list": [ 00:20:06.183 { 00:20:06.183 "name": "spare", 00:20:06.183 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:06.183 "is_configured": true, 00:20:06.183 "data_offset": 0, 00:20:06.183 "data_size": 65536 00:20:06.183 }, 00:20:06.183 { 00:20:06.183 "name": null, 00:20:06.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.183 "is_configured": false, 00:20:06.183 "data_offset": 0, 00:20:06.183 "data_size": 65536 00:20:06.183 }, 00:20:06.183 { 00:20:06.183 "name": "BaseBdev3", 
00:20:06.183 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:06.183 "is_configured": true, 00:20:06.183 "data_offset": 0, 00:20:06.183 "data_size": 65536 00:20:06.183 }, 00:20:06.183 { 00:20:06.183 "name": "BaseBdev4", 00:20:06.183 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:06.183 "is_configured": true, 00:20:06.183 "data_offset": 0, 00:20:06.183 "data_size": 65536 00:20:06.183 } 00:20:06.183 ] 00:20:06.183 }' 00:20:06.183 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.444 130.75 IOPS, 392.25 MiB/s [2024-11-06T09:12:42.979Z] 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=522 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.444 "name": "raid_bdev1", 00:20:06.444 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:06.444 "strip_size_kb": 0, 00:20:06.444 "state": "online", 00:20:06.444 "raid_level": "raid1", 00:20:06.444 "superblock": false, 00:20:06.444 "num_base_bdevs": 4, 00:20:06.444 "num_base_bdevs_discovered": 3, 00:20:06.444 "num_base_bdevs_operational": 3, 00:20:06.444 "process": { 00:20:06.444 "type": "rebuild", 00:20:06.444 "target": "spare", 00:20:06.444 "progress": { 00:20:06.444 "blocks": 18432, 00:20:06.444 "percent": 28 00:20:06.444 } 00:20:06.444 }, 00:20:06.444 "base_bdevs_list": [ 00:20:06.444 { 00:20:06.444 "name": "spare", 00:20:06.444 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:06.444 "is_configured": true, 00:20:06.444 "data_offset": 0, 00:20:06.444 "data_size": 65536 00:20:06.444 }, 00:20:06.444 { 00:20:06.444 "name": null, 00:20:06.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.444 "is_configured": false, 00:20:06.444 "data_offset": 0, 00:20:06.444 "data_size": 65536 00:20:06.444 }, 00:20:06.444 { 00:20:06.444 "name": "BaseBdev3", 00:20:06.444 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:06.444 "is_configured": true, 00:20:06.444 "data_offset": 0, 00:20:06.444 "data_size": 65536 00:20:06.444 }, 00:20:06.444 { 00:20:06.444 "name": "BaseBdev4", 00:20:06.444 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:06.444 "is_configured": true, 00:20:06.444 "data_offset": 0, 00:20:06.444 "data_size": 65536 00:20:06.444 } 00:20:06.444 ] 00:20:06.444 }' 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:20:06.444 [2024-11-06 09:12:42.898999] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.444 09:12:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:06.702 [2024-11-06 09:12:43.018577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:06.961 [2024-11-06 09:12:43.346861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:06.961 [2024-11-06 09:12:43.465460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:07.527 113.80 IOPS, 341.40 MiB/s [2024-11-06T09:12:44.062Z] [2024-11-06 09:12:43.791247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:07.527 
09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:07.527 09:12:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.527 09:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:07.527 "name": "raid_bdev1", 00:20:07.527 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:07.527 "strip_size_kb": 0, 00:20:07.527 "state": "online", 00:20:07.527 "raid_level": "raid1", 00:20:07.527 "superblock": false, 00:20:07.527 "num_base_bdevs": 4, 00:20:07.527 "num_base_bdevs_discovered": 3, 00:20:07.527 "num_base_bdevs_operational": 3, 00:20:07.527 "process": { 00:20:07.527 "type": "rebuild", 00:20:07.527 "target": "spare", 00:20:07.527 "progress": { 00:20:07.527 "blocks": 32768, 00:20:07.527 "percent": 50 00:20:07.527 } 00:20:07.527 }, 00:20:07.527 "base_bdevs_list": [ 00:20:07.527 { 00:20:07.527 "name": "spare", 00:20:07.527 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:07.527 "is_configured": true, 00:20:07.527 "data_offset": 0, 00:20:07.527 "data_size": 65536 00:20:07.527 }, 00:20:07.527 { 00:20:07.527 "name": null, 00:20:07.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.527 "is_configured": false, 00:20:07.527 "data_offset": 0, 00:20:07.527 "data_size": 65536 00:20:07.527 }, 00:20:07.527 { 00:20:07.527 "name": "BaseBdev3", 00:20:07.527 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:07.527 "is_configured": true, 00:20:07.527 "data_offset": 0, 00:20:07.527 "data_size": 65536 00:20:07.527 }, 00:20:07.527 { 00:20:07.527 "name": "BaseBdev4", 00:20:07.527 "uuid": 
"a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:07.527 "is_configured": true, 00:20:07.527 "data_offset": 0, 00:20:07.527 "data_size": 65536 00:20:07.527 } 00:20:07.527 ] 00:20:07.527 }' 00:20:07.527 [2024-11-06 09:12:44.018571] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:20:07.527 09:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:07.785 09:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:07.785 09:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:07.785 09:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:07.785 09:12:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:08.043 [2024-11-06 09:12:44.482913] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:08.868 101.17 IOPS, 303.50 MiB/s [2024-11-06T09:12:45.403Z] 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:08.868 "name": "raid_bdev1", 00:20:08.868 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:08.868 "strip_size_kb": 0, 00:20:08.868 "state": "online", 00:20:08.868 "raid_level": "raid1", 00:20:08.868 "superblock": false, 00:20:08.868 "num_base_bdevs": 4, 00:20:08.868 "num_base_bdevs_discovered": 3, 00:20:08.868 "num_base_bdevs_operational": 3, 00:20:08.868 "process": { 00:20:08.868 "type": "rebuild", 00:20:08.868 "target": "spare", 00:20:08.868 "progress": { 00:20:08.868 "blocks": 51200, 00:20:08.868 "percent": 78 00:20:08.868 } 00:20:08.868 }, 00:20:08.868 "base_bdevs_list": [ 00:20:08.868 { 00:20:08.868 "name": "spare", 00:20:08.868 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:08.868 "is_configured": true, 00:20:08.868 "data_offset": 0, 00:20:08.868 "data_size": 65536 00:20:08.868 }, 00:20:08.868 { 00:20:08.868 "name": null, 00:20:08.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.868 "is_configured": false, 00:20:08.868 "data_offset": 0, 00:20:08.868 "data_size": 65536 00:20:08.868 }, 00:20:08.868 { 00:20:08.868 "name": "BaseBdev3", 00:20:08.868 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:08.868 "is_configured": true, 00:20:08.868 "data_offset": 0, 00:20:08.868 "data_size": 65536 00:20:08.868 }, 00:20:08.868 { 00:20:08.868 "name": "BaseBdev4", 00:20:08.868 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:08.868 "is_configured": true, 00:20:08.868 "data_offset": 0, 00:20:08.868 "data_size": 65536 00:20:08.868 } 00:20:08.868 ] 00:20:08.868 }' 00:20:08.868 09:12:45 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:08.868 09:12:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:09.435 91.86 IOPS, 275.57 MiB/s [2024-11-06T09:12:45.970Z] [2024-11-06 09:12:45.877157] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:09.693 [2024-11-06 09:12:45.977151] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:09.693 [2024-11-06 09:12:45.989145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.951 09:12:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:09.951 "name": "raid_bdev1", 00:20:09.951 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:09.951 "strip_size_kb": 0, 00:20:09.951 "state": "online", 00:20:09.951 "raid_level": "raid1", 00:20:09.951 "superblock": false, 00:20:09.951 "num_base_bdevs": 4, 00:20:09.951 "num_base_bdevs_discovered": 3, 00:20:09.951 "num_base_bdevs_operational": 3, 00:20:09.951 "base_bdevs_list": [ 00:20:09.951 { 00:20:09.951 "name": "spare", 00:20:09.951 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:09.951 "is_configured": true, 00:20:09.951 "data_offset": 0, 00:20:09.951 "data_size": 65536 00:20:09.951 }, 00:20:09.951 { 00:20:09.951 "name": null, 00:20:09.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.951 "is_configured": false, 00:20:09.951 "data_offset": 0, 00:20:09.951 "data_size": 65536 00:20:09.951 }, 00:20:09.951 { 00:20:09.951 "name": "BaseBdev3", 00:20:09.951 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:09.951 "is_configured": true, 00:20:09.951 "data_offset": 0, 00:20:09.951 "data_size": 65536 00:20:09.951 }, 00:20:09.951 { 00:20:09.951 "name": "BaseBdev4", 00:20:09.951 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:09.951 "is_configured": true, 00:20:09.951 "data_offset": 0, 00:20:09.951 "data_size": 65536 00:20:09.951 } 00:20:09.951 ] 00:20:09.951 }' 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:09.951 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:10.209 "name": "raid_bdev1", 00:20:10.209 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:10.209 "strip_size_kb": 0, 00:20:10.209 "state": "online", 00:20:10.209 "raid_level": "raid1", 00:20:10.209 "superblock": false, 00:20:10.209 "num_base_bdevs": 4, 00:20:10.209 "num_base_bdevs_discovered": 3, 00:20:10.209 "num_base_bdevs_operational": 3, 00:20:10.209 "base_bdevs_list": [ 00:20:10.209 { 00:20:10.209 "name": "spare", 00:20:10.209 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:10.209 "is_configured": true, 00:20:10.209 "data_offset": 0, 00:20:10.209 "data_size": 65536 00:20:10.209 }, 00:20:10.209 { 00:20:10.209 "name": null, 
00:20:10.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.209 "is_configured": false, 00:20:10.209 "data_offset": 0, 00:20:10.209 "data_size": 65536 00:20:10.209 }, 00:20:10.209 { 00:20:10.209 "name": "BaseBdev3", 00:20:10.209 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:10.209 "is_configured": true, 00:20:10.209 "data_offset": 0, 00:20:10.209 "data_size": 65536 00:20:10.209 }, 00:20:10.209 { 00:20:10.209 "name": "BaseBdev4", 00:20:10.209 "uuid": "a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:10.209 "is_configured": true, 00:20:10.209 "data_offset": 0, 00:20:10.209 "data_size": 65536 00:20:10.209 } 00:20:10.209 ] 00:20:10.209 }' 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.209 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.210 "name": "raid_bdev1", 00:20:10.210 "uuid": "1a2962bd-2542-4f47-9bb9-7b8a31edd938", 00:20:10.210 "strip_size_kb": 0, 00:20:10.210 "state": "online", 00:20:10.210 "raid_level": "raid1", 00:20:10.210 "superblock": false, 00:20:10.210 "num_base_bdevs": 4, 00:20:10.210 "num_base_bdevs_discovered": 3, 00:20:10.210 "num_base_bdevs_operational": 3, 00:20:10.210 "base_bdevs_list": [ 00:20:10.210 { 00:20:10.210 "name": "spare", 00:20:10.210 "uuid": "ce6d6efe-896d-548e-9e91-7c437449b662", 00:20:10.210 "is_configured": true, 00:20:10.210 "data_offset": 0, 00:20:10.210 "data_size": 65536 00:20:10.210 }, 00:20:10.210 { 00:20:10.210 "name": null, 00:20:10.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.210 "is_configured": false, 00:20:10.210 "data_offset": 0, 00:20:10.210 "data_size": 65536 00:20:10.210 }, 00:20:10.210 { 00:20:10.210 "name": "BaseBdev3", 00:20:10.210 "uuid": "2bcf5ee3-1298-52a3-9e5b-66efff8c6c5f", 00:20:10.210 "is_configured": true, 00:20:10.210 "data_offset": 0, 00:20:10.210 "data_size": 65536 00:20:10.210 }, 00:20:10.210 { 00:20:10.210 "name": "BaseBdev4", 00:20:10.210 "uuid": 
"a6634dc6-5beb-5227-bbf8-420f6fd9e048", 00:20:10.210 "is_configured": true, 00:20:10.210 "data_offset": 0, 00:20:10.210 "data_size": 65536 00:20:10.210 } 00:20:10.210 ] 00:20:10.210 }' 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.210 09:12:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.726 84.00 IOPS, 252.00 MiB/s [2024-11-06T09:12:47.261Z] 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.726 [2024-11-06 09:12:47.134958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:10.726 [2024-11-06 09:12:47.135004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.726 00:20:10.726 Latency(us) 00:20:10.726 [2024-11-06T09:12:47.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.726 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:10.726 raid_bdev1 : 8.46 81.41 244.22 0.00 0.00 16892.05 307.20 122969.37 00:20:10.726 [2024-11-06T09:12:47.261Z] =================================================================================================================== 00:20:10.726 [2024-11-06T09:12:47.261Z] Total : 81.41 244.22 0.00 0.00 16892.05 307.20 122969.37 00:20:10.726 [2024-11-06 09:12:47.239541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.726 [2024-11-06 09:12:47.239636] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.726 [2024-11-06 09:12:47.239765] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.726 
[2024-11-06 09:12:47.239791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:10.726 { 00:20:10.726 "results": [ 00:20:10.726 { 00:20:10.726 "job": "raid_bdev1", 00:20:10.726 "core_mask": "0x1", 00:20:10.726 "workload": "randrw", 00:20:10.726 "percentage": 50, 00:20:10.726 "status": "finished", 00:20:10.726 "queue_depth": 2, 00:20:10.726 "io_size": 3145728, 00:20:10.726 "runtime": 8.463746, 00:20:10.726 "iops": 81.40603463289186, 00:20:10.726 "mibps": 244.21810389867557, 00:20:10.726 "io_failed": 0, 00:20:10.726 "io_timeout": 0, 00:20:10.726 "avg_latency_us": 16892.049378545984, 00:20:10.726 "min_latency_us": 307.2, 00:20:10.726 "max_latency_us": 122969.36727272728 00:20:10.726 } 00:20:10.726 ], 00:20:10.726 "core_count": 1 00:20:10.726 } 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:10.726 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:10.984 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:20:11.242 /dev/nbd0 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:11.242 1+0 records in 00:20:11.242 1+0 records out 00:20:11.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357332 s, 11.5 MB/s 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:20:11.242 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:11.243 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:20:11.501 /dev/nbd1 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:11.501 1+0 records in 
00:20:11.501 1+0 records out 00:20:11.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284406 s, 14.4 MB/s 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:11.501 09:12:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:11.758 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:11.758 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:11.758 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:11.758 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:11.758 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:11.758 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:11.758 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:12.015 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:12.015 09:12:48 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:12.015 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:12.015 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.015 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.015 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:12.016 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 
00:20:12.274 /dev/nbd1 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:12.274 1+0 records in 00:20:12.274 1+0 records out 00:20:12.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300112 s, 13.6 MB/s 00:20:12.274 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 
00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.532 09:12:48 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 
-- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.790 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78980 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 78980 ']' 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 78980 00:20:13.048 09:12:49 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:20:13.048 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:13.305 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78980 00:20:13.305 killing process with pid 78980 00:20:13.305 Received shutdown signal, test time was about 10.848090 seconds 00:20:13.305 00:20:13.305 Latency(us) 00:20:13.305 [2024-11-06T09:12:49.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.305 [2024-11-06T09:12:49.840Z] =================================================================================================================== 00:20:13.305 [2024-11-06T09:12:49.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:13.305 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:13.305 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:13.305 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78980' 00:20:13.305 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 78980 00:20:13.305 09:12:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 78980 00:20:13.305 [2024-11-06 09:12:49.603243] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:13.564 [2024-11-06 09:12:49.984142] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:14.949 ************************************ 00:20:14.949 END TEST raid_rebuild_test_io 00:20:14.949 ************************************ 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:14.949 00:20:14.949 real 0m14.511s 00:20:14.949 user 0m19.143s 00:20:14.949 sys 0m1.801s 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:20:14.949 09:12:51 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:20:14.949 09:12:51 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:20:14.949 09:12:51 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:14.949 09:12:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:14.949 ************************************ 00:20:14.949 START TEST raid_rebuild_test_sb_io 00:20:14.949 ************************************ 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:14.949 09:12:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # 
create_arg+=' -s' 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79401 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79401 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 79401 ']' 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:14.949 09:12:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:14.949 [2024-11-06 09:12:51.248543] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:20:14.949 [2024-11-06 09:12:51.248709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79401 ] 00:20:14.949 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:14.949 Zero copy mechanism will not be used. 
00:20:14.949 [2024-11-06 09:12:51.427034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.225 [2024-11-06 09:12:51.588934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.487 [2024-11-06 09:12:51.796250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:15.487 [2024-11-06 09:12:51.796333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 BaseBdev1_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-06 09:12:52.344304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:16.053 [2024-11-06 09:12:52.344388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.053 [2024-11-06 09:12:52.344422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:20:16.053 [2024-11-06 09:12:52.344441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.053 [2024-11-06 09:12:52.347261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.053 BaseBdev1 00:20:16.053 [2024-11-06 09:12:52.347449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 BaseBdev2_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-06 09:12:52.397008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:16.053 [2024-11-06 09:12:52.397232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.053 [2024-11-06 09:12:52.397305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:16.053 [2024-11-06 09:12:52.397421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.053 [2024-11-06 09:12:52.400403] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.053 [2024-11-06 09:12:52.400565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:16.053 BaseBdev2 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 BaseBdev3_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-06 09:12:52.459613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:16.053 [2024-11-06 09:12:52.459852] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.053 [2024-11-06 09:12:52.460033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:16.053 [2024-11-06 09:12:52.460209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.053 [2024-11-06 09:12:52.463295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.053 BaseBdev3 00:20:16.053 [2024-11-06 09:12:52.463460] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 BaseBdev4_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-06 09:12:52.508238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:16.053 [2024-11-06 09:12:52.508442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.053 [2024-11-06 09:12:52.508515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:16.053 [2024-11-06 09:12:52.508643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.053 [2024-11-06 09:12:52.511490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.053 [2024-11-06 09:12:52.511676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:16.053 BaseBdev4 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 spare_malloc 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 spare_delay 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-06 09:12:52.564101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:16.053 [2024-11-06 09:12:52.564358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.053 [2024-11-06 09:12:52.564396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:16.053 [2024-11-06 09:12:52.564416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.053 [2024-11-06 09:12:52.567175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.053 [2024-11-06 09:12:52.567226] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:16.053 spare 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.053 [2024-11-06 09:12:52.572161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.053 [2024-11-06 09:12:52.574661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.053 [2024-11-06 09:12:52.574903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.053 [2024-11-06 09:12:52.575106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:16.053 [2024-11-06 09:12:52.575459] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:16.053 [2024-11-06 09:12:52.575604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:16.053 [2024-11-06 09:12:52.576039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:16.053 [2024-11-06 09:12:52.576400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:16.053 [2024-11-06 09:12:52.576524] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:16.053 [2024-11-06 09:12:52.576948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.053 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.312 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.312 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.312 "name": "raid_bdev1", 00:20:16.312 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:16.312 "strip_size_kb": 0, 00:20:16.312 "state": "online", 00:20:16.312 "raid_level": "raid1", 
00:20:16.312 "superblock": true, 00:20:16.312 "num_base_bdevs": 4, 00:20:16.312 "num_base_bdevs_discovered": 4, 00:20:16.312 "num_base_bdevs_operational": 4, 00:20:16.312 "base_bdevs_list": [ 00:20:16.312 { 00:20:16.312 "name": "BaseBdev1", 00:20:16.312 "uuid": "75050f74-a65b-5f77-af6a-250f41073b2b", 00:20:16.312 "is_configured": true, 00:20:16.312 "data_offset": 2048, 00:20:16.312 "data_size": 63488 00:20:16.312 }, 00:20:16.312 { 00:20:16.312 "name": "BaseBdev2", 00:20:16.312 "uuid": "ac4df726-2862-5bf5-a669-35c476c2156e", 00:20:16.312 "is_configured": true, 00:20:16.312 "data_offset": 2048, 00:20:16.312 "data_size": 63488 00:20:16.312 }, 00:20:16.312 { 00:20:16.312 "name": "BaseBdev3", 00:20:16.312 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:16.312 "is_configured": true, 00:20:16.312 "data_offset": 2048, 00:20:16.312 "data_size": 63488 00:20:16.312 }, 00:20:16.312 { 00:20:16.312 "name": "BaseBdev4", 00:20:16.312 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:16.312 "is_configured": true, 00:20:16.312 "data_offset": 2048, 00:20:16.312 "data_size": 63488 00:20:16.312 } 00:20:16.312 ] 00:20:16.312 }' 00:20:16.312 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.312 09:12:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.570 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:16.570 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.570 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:16.570 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.570 [2024-11-06 09:12:53.089466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
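The `verify_raid_bdev_state raid_bdev1 online raid1 0 4` call above pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq` and compares fields against the expected state. A minimal Python sketch of the same checks, using an abridged copy of the JSON dumped in this log (field names are from the log; the exact set of fields `verify_raid_bdev_state` compares is an assumption):

```python
import json

# Abridged raid_bdev_info as dumped by `rpc_cmd bdev_raid_get_bdevs all` above.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# verify_raid_bdev_state raid_bdev1 online raid1 0 4 -> expected values:
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid1"
assert raid_bdev_info["strip_size_kb"] == 0          # raid1 has no stripes
assert raid_bdev_info["num_base_bdevs_operational"] == 4
```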
0 ]] 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.828 [2024-11-06 09:12:53.173039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.828 09:12:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:16.828 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.829 "name": "raid_bdev1", 00:20:16.829 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:16.829 "strip_size_kb": 0, 00:20:16.829 "state": "online", 00:20:16.829 "raid_level": "raid1", 00:20:16.829 "superblock": true, 00:20:16.829 "num_base_bdevs": 4, 00:20:16.829 "num_base_bdevs_discovered": 3, 00:20:16.829 "num_base_bdevs_operational": 3, 00:20:16.829 "base_bdevs_list": [ 00:20:16.829 { 00:20:16.829 "name": null, 00:20:16.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.829 "is_configured": false, 00:20:16.829 "data_offset": 0, 00:20:16.829 "data_size": 
63488 00:20:16.829 }, 00:20:16.829 { 00:20:16.829 "name": "BaseBdev2", 00:20:16.829 "uuid": "ac4df726-2862-5bf5-a669-35c476c2156e", 00:20:16.829 "is_configured": true, 00:20:16.829 "data_offset": 2048, 00:20:16.829 "data_size": 63488 00:20:16.829 }, 00:20:16.829 { 00:20:16.829 "name": "BaseBdev3", 00:20:16.829 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:16.829 "is_configured": true, 00:20:16.829 "data_offset": 2048, 00:20:16.829 "data_size": 63488 00:20:16.829 }, 00:20:16.829 { 00:20:16.829 "name": "BaseBdev4", 00:20:16.829 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:16.829 "is_configured": true, 00:20:16.829 "data_offset": 2048, 00:20:16.829 "data_size": 63488 00:20:16.829 } 00:20:16.829 ] 00:20:16.829 }' 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.829 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:16.829 [2024-11-06 09:12:53.301145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:16.829 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:16.829 Zero copy mechanism will not be used. 00:20:16.829 Running I/O for 60 seconds... 
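After `bdev_raid_remove_base_bdev BaseBdev1`, the array stays online but degraded: the dump above shows discovered/operational dropping from 4 to 3, while the removed slot keeps its position in `base_bdevs_list` with a null name and a zeroed UUID. A short Python sketch of that invariant, again using abridged data from the log:

```python
import json

# Abridged state dumped after removing BaseBdev1 (values taken from the log above).
degraded = json.loads("""
{
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0},
    {"name": "BaseBdev2", "is_configured": true, "data_offset": 2048},
    {"name": "BaseBdev3", "is_configured": true, "data_offset": 2048},
    {"name": "BaseBdev4", "is_configured": true, "data_offset": 2048}
  ]
}
""")

# The configured-bdev count should match what the raid bdev reports.
configured = [b for b in degraded["base_bdevs_list"] if b["is_configured"]]
assert len(configured) == degraded["num_base_bdevs_discovered"] == 3

# The emptied slot is kept in place so a replacement can rebuild into it.
assert degraded["base_bdevs_list"][0]["name"] is None
assert degraded["base_bdevs_list"][0]["data_offset"] == 0
```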
00:20:17.404 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:17.404 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.404 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:17.404 [2024-11-06 09:12:53.681040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:17.404 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.404 09:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:17.404 [2024-11-06 09:12:53.786376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:17.404 [2024-11-06 09:12:53.789073] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:17.404 [2024-11-06 09:12:53.933592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:17.662 [2024-11-06 09:12:54.171740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:17.662 [2024-11-06 09:12:54.172413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:18.180 165.00 IOPS, 495.00 MiB/s [2024-11-06T09:12:54.715Z] [2024-11-06 09:12:54.488229] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:18.438 [2024-11-06 09:12:54.714094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.438 "name": "raid_bdev1", 00:20:18.438 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:18.438 "strip_size_kb": 0, 00:20:18.438 "state": "online", 00:20:18.438 "raid_level": "raid1", 00:20:18.438 "superblock": true, 00:20:18.438 "num_base_bdevs": 4, 00:20:18.438 "num_base_bdevs_discovered": 4, 00:20:18.438 "num_base_bdevs_operational": 4, 00:20:18.438 "process": { 00:20:18.438 "type": "rebuild", 00:20:18.438 "target": "spare", 00:20:18.438 "progress": { 00:20:18.438 "blocks": 10240, 00:20:18.438 "percent": 16 00:20:18.438 } 00:20:18.438 }, 00:20:18.438 "base_bdevs_list": [ 00:20:18.438 { 00:20:18.438 "name": "spare", 00:20:18.438 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:18.438 "is_configured": true, 00:20:18.438 "data_offset": 2048, 00:20:18.438 "data_size": 63488 00:20:18.438 }, 00:20:18.438 { 00:20:18.438 "name": "BaseBdev2", 00:20:18.438 "uuid": "ac4df726-2862-5bf5-a669-35c476c2156e", 00:20:18.438 "is_configured": true, 
00:20:18.438 "data_offset": 2048, 00:20:18.438 "data_size": 63488 00:20:18.438 }, 00:20:18.438 { 00:20:18.438 "name": "BaseBdev3", 00:20:18.438 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:18.438 "is_configured": true, 00:20:18.438 "data_offset": 2048, 00:20:18.438 "data_size": 63488 00:20:18.438 }, 00:20:18.438 { 00:20:18.438 "name": "BaseBdev4", 00:20:18.438 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:18.438 "is_configured": true, 00:20:18.438 "data_offset": 2048, 00:20:18.438 "data_size": 63488 00:20:18.438 } 00:20:18.438 ] 00:20:18.438 }' 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.438 09:12:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.438 [2024-11-06 09:12:54.916911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:18.696 [2024-11-06 09:12:54.989974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:18.696 [2024-11-06 09:12:55.012005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.696 [2024-11-06 09:12:55.012095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:18.696 [2024-11-06 09:12:55.012115] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such 
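The `process` block in the dump above reports rebuild progress as a block count plus an integer percentage. The exact rounding SPDK applies is an assumption here, but truncating integer division is consistent with both progress dumps in this log (10240 and 12288 of 63488 blocks):

```python
# Rebuild progress accounting as reported in the process blocks above.
# raid_bdev1 was created with "blockcnt 63488" per the earlier log line.
raid_size_blocks = 63488

def rebuild_percent(rebuilt_blocks: int, total_blocks: int = raid_size_blocks) -> int:
    # Truncating division matches the log's reported percentages (assumed rounding).
    return rebuilt_blocks * 100 // total_blocks

assert rebuild_percent(10240) == 16   # matches '"blocks": 10240, "percent": 16'
assert rebuild_percent(12288) == 19   # matches the later '"blocks": 12288, "percent": 19'
```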
device 00:20:18.696 [2024-11-06 09:12:55.028095] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:18.696 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.696 09:12:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.696 "name": "raid_bdev1", 00:20:18.696 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:18.696 "strip_size_kb": 0, 00:20:18.696 "state": "online", 00:20:18.696 "raid_level": "raid1", 00:20:18.696 "superblock": true, 00:20:18.696 "num_base_bdevs": 4, 00:20:18.696 "num_base_bdevs_discovered": 3, 00:20:18.696 "num_base_bdevs_operational": 3, 00:20:18.696 "base_bdevs_list": [ 00:20:18.696 { 00:20:18.696 "name": null, 00:20:18.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.696 "is_configured": false, 00:20:18.696 "data_offset": 0, 00:20:18.696 "data_size": 63488 00:20:18.696 }, 00:20:18.696 { 00:20:18.696 "name": "BaseBdev2", 00:20:18.696 "uuid": "ac4df726-2862-5bf5-a669-35c476c2156e", 00:20:18.696 "is_configured": true, 00:20:18.696 "data_offset": 2048, 00:20:18.696 "data_size": 63488 00:20:18.696 }, 00:20:18.697 { 00:20:18.697 "name": "BaseBdev3", 00:20:18.697 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:18.697 "is_configured": true, 00:20:18.697 "data_offset": 2048, 00:20:18.697 "data_size": 63488 00:20:18.697 }, 00:20:18.697 { 00:20:18.697 "name": "BaseBdev4", 00:20:18.697 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:18.697 "is_configured": true, 00:20:18.697 "data_offset": 2048, 00:20:18.697 "data_size": 63488 00:20:18.697 } 00:20:18.697 ] 00:20:18.697 }' 00:20:18.697 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.697 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.213 140.00 IOPS, 420.00 MiB/s [2024-11-06T09:12:55.748Z] 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.213 "name": "raid_bdev1", 00:20:19.213 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:19.213 "strip_size_kb": 0, 00:20:19.213 "state": "online", 00:20:19.213 "raid_level": "raid1", 00:20:19.213 "superblock": true, 00:20:19.213 "num_base_bdevs": 4, 00:20:19.213 "num_base_bdevs_discovered": 3, 00:20:19.213 "num_base_bdevs_operational": 3, 00:20:19.213 "base_bdevs_list": [ 00:20:19.213 { 00:20:19.213 "name": null, 00:20:19.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:19.213 "is_configured": false, 00:20:19.213 "data_offset": 0, 00:20:19.213 "data_size": 63488 00:20:19.213 }, 00:20:19.213 { 00:20:19.213 "name": "BaseBdev2", 00:20:19.213 "uuid": "ac4df726-2862-5bf5-a669-35c476c2156e", 00:20:19.213 "is_configured": true, 00:20:19.213 "data_offset": 2048, 00:20:19.213 "data_size": 63488 00:20:19.213 }, 00:20:19.213 { 00:20:19.213 "name": "BaseBdev3", 00:20:19.213 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:19.213 "is_configured": true, 00:20:19.213 "data_offset": 2048, 00:20:19.213 "data_size": 63488 00:20:19.213 }, 00:20:19.213 { 00:20:19.213 "name": 
"BaseBdev4", 00:20:19.213 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:19.213 "is_configured": true, 00:20:19.213 "data_offset": 2048, 00:20:19.213 "data_size": 63488 00:20:19.213 } 00:20:19.213 ] 00:20:19.213 }' 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.213 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:19.213 [2024-11-06 09:12:55.728478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.471 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.471 09:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:19.471 [2024-11-06 09:12:55.814456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:19.471 [2024-11-06 09:12:55.817118] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:19.471 [2024-11-06 09:12:55.957312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:19.729 [2024-11-06 09:12:56.070338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:19.729 [2024-11-06 09:12:56.070916] bdev_raid.c: 
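`verify_raid_bdev_process` above relies on jq's alternative operator (`.process.type // "none"`) so that a raid bdev with no active process, where the `process` key is absent entirely, still compares equal to "none". The same fallback expressed in Python:

```python
import json

# Two shapes seen in this log: mid-rebuild (has "process") and idle (no "process" key).
rebuilding = json.loads('{"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}')
idle = json.loads('{"name": "raid_bdev1"}')

def process_type(info: dict) -> str:
    # Mirrors jq's '.process.type // "none"': missing keys fall back to "none".
    return info.get("process", {}).get("type", "none")

def process_target(info: dict) -> str:
    # Mirrors jq's '.process.target // "none"'.
    return info.get("process", {}).get("target", "none")

assert process_type(rebuilding) == "rebuild"
assert process_target(rebuilding) == "spare"
assert process_type(idle) == "none"
```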
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:19.987 145.67 IOPS, 437.00 MiB/s [2024-11-06T09:12:56.522Z] [2024-11-06 09:12:56.329149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:19.987 [2024-11-06 09:12:56.329723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:20.245 [2024-11-06 09:12:56.553485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:20.245 [2024-11-06 09:12:56.554612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.503 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.503 09:12:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.503 "name": "raid_bdev1", 00:20:20.503 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:20.503 "strip_size_kb": 0, 00:20:20.503 "state": "online", 00:20:20.503 "raid_level": "raid1", 00:20:20.503 "superblock": true, 00:20:20.503 "num_base_bdevs": 4, 00:20:20.503 "num_base_bdevs_discovered": 4, 00:20:20.503 "num_base_bdevs_operational": 4, 00:20:20.503 "process": { 00:20:20.503 "type": "rebuild", 00:20:20.503 "target": "spare", 00:20:20.503 "progress": { 00:20:20.503 "blocks": 12288, 00:20:20.503 "percent": 19 00:20:20.503 } 00:20:20.503 }, 00:20:20.503 "base_bdevs_list": [ 00:20:20.503 { 00:20:20.503 "name": "spare", 00:20:20.503 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:20.503 "is_configured": true, 00:20:20.503 "data_offset": 2048, 00:20:20.503 "data_size": 63488 00:20:20.503 }, 00:20:20.503 { 00:20:20.503 "name": "BaseBdev2", 00:20:20.503 "uuid": "ac4df726-2862-5bf5-a669-35c476c2156e", 00:20:20.503 "is_configured": true, 00:20:20.503 "data_offset": 2048, 00:20:20.503 "data_size": 63488 00:20:20.503 }, 00:20:20.503 { 00:20:20.503 "name": "BaseBdev3", 00:20:20.503 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:20.503 "is_configured": true, 00:20:20.503 "data_offset": 2048, 00:20:20.503 "data_size": 63488 00:20:20.503 }, 00:20:20.503 { 00:20:20.504 "name": "BaseBdev4", 00:20:20.504 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:20.504 "is_configured": true, 00:20:20.504 "data_offset": 2048, 00:20:20.504 "data_size": 63488 00:20:20.504 } 00:20:20.504 ] 00:20:20.504 }' 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.504 [2024-11-06 
09:12:56.900955] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:20.504 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.504 09:12:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.504 [2024-11-06 09:12:56.962698] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:20.504 [2024-11-06 09:12:57.014826] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:20.762 [2024-11-06 09:12:57.170952] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:20:20.762 [2024-11-06 09:12:57.171187] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:20:20.762 
09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.762 [2024-11-06 09:12:57.192089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:20.762 "name": "raid_bdev1", 00:20:20.762 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:20.762 "strip_size_kb": 0, 00:20:20.762 "state": "online", 00:20:20.762 "raid_level": "raid1", 00:20:20.762 "superblock": true, 00:20:20.762 "num_base_bdevs": 4, 00:20:20.762 "num_base_bdevs_discovered": 3, 00:20:20.762 "num_base_bdevs_operational": 3, 00:20:20.762 "process": { 00:20:20.762 "type": "rebuild", 00:20:20.762 "target": "spare", 00:20:20.762 "progress": { 
00:20:20.762 "blocks": 16384, 00:20:20.762 "percent": 25 00:20:20.762 } 00:20:20.762 }, 00:20:20.762 "base_bdevs_list": [ 00:20:20.762 { 00:20:20.762 "name": "spare", 00:20:20.762 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:20.762 "is_configured": true, 00:20:20.762 "data_offset": 2048, 00:20:20.762 "data_size": 63488 00:20:20.762 }, 00:20:20.762 { 00:20:20.762 "name": null, 00:20:20.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.762 "is_configured": false, 00:20:20.762 "data_offset": 0, 00:20:20.762 "data_size": 63488 00:20:20.762 }, 00:20:20.762 { 00:20:20.762 "name": "BaseBdev3", 00:20:20.762 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:20.762 "is_configured": true, 00:20:20.762 "data_offset": 2048, 00:20:20.762 "data_size": 63488 00:20:20.762 }, 00:20:20.762 { 00:20:20.762 "name": "BaseBdev4", 00:20:20.762 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:20.762 "is_configured": true, 00:20:20.762 "data_offset": 2048, 00:20:20.762 "data_size": 63488 00:20:20.762 } 00:20:20.762 ] 00:20:20.762 }' 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:20.762 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.021 132.50 IOPS, 397.50 MiB/s [2024-11-06T09:12:57.556Z] 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=537 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:21.021 "name": "raid_bdev1", 00:20:21.021 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:21.021 "strip_size_kb": 0, 00:20:21.021 "state": "online", 00:20:21.021 "raid_level": "raid1", 00:20:21.021 "superblock": true, 00:20:21.021 "num_base_bdevs": 4, 00:20:21.021 "num_base_bdevs_discovered": 3, 00:20:21.021 "num_base_bdevs_operational": 3, 00:20:21.021 "process": { 00:20:21.021 "type": "rebuild", 00:20:21.021 "target": "spare", 00:20:21.021 "progress": { 00:20:21.021 "blocks": 18432, 00:20:21.021 "percent": 29 00:20:21.021 } 00:20:21.021 }, 00:20:21.021 "base_bdevs_list": [ 00:20:21.021 { 00:20:21.021 "name": "spare", 00:20:21.021 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:21.021 "is_configured": true, 00:20:21.021 "data_offset": 2048, 00:20:21.021 "data_size": 63488 00:20:21.021 }, 00:20:21.021 { 00:20:21.021 "name": null, 00:20:21.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.021 
"is_configured": false, 00:20:21.021 "data_offset": 0, 00:20:21.021 "data_size": 63488 00:20:21.021 }, 00:20:21.021 { 00:20:21.021 "name": "BaseBdev3", 00:20:21.021 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:21.021 "is_configured": true, 00:20:21.021 "data_offset": 2048, 00:20:21.021 "data_size": 63488 00:20:21.021 }, 00:20:21.021 { 00:20:21.021 "name": "BaseBdev4", 00:20:21.021 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:21.021 "is_configured": true, 00:20:21.021 "data_offset": 2048, 00:20:21.021 "data_size": 63488 00:20:21.021 } 00:20:21.021 ] 00:20:21.021 }' 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.021 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.022 09:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:21.022 [2024-11-06 09:12:57.553430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:21.588 [2024-11-06 09:12:57.909982] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:21.848 [2024-11-06 09:12:58.132462] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:22.107 115.60 IOPS, 346.80 MiB/s [2024-11-06T09:12:58.642Z] 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:22.107 "name": "raid_bdev1", 00:20:22.107 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:22.107 "strip_size_kb": 0, 00:20:22.107 "state": "online", 00:20:22.107 "raid_level": "raid1", 00:20:22.107 "superblock": true, 00:20:22.107 "num_base_bdevs": 4, 00:20:22.107 "num_base_bdevs_discovered": 3, 00:20:22.107 "num_base_bdevs_operational": 3, 00:20:22.107 "process": { 00:20:22.107 "type": "rebuild", 00:20:22.107 "target": "spare", 00:20:22.107 "progress": { 00:20:22.107 "blocks": 34816, 00:20:22.107 "percent": 54 00:20:22.107 } 00:20:22.107 }, 00:20:22.107 "base_bdevs_list": [ 00:20:22.107 { 00:20:22.107 "name": "spare", 00:20:22.107 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:22.107 "is_configured": true, 00:20:22.107 "data_offset": 2048, 00:20:22.107 "data_size": 63488 00:20:22.107 }, 00:20:22.107 { 00:20:22.107 "name": null, 00:20:22.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.107 
"is_configured": false, 00:20:22.107 "data_offset": 0, 00:20:22.107 "data_size": 63488 00:20:22.107 }, 00:20:22.107 { 00:20:22.107 "name": "BaseBdev3", 00:20:22.107 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:22.107 "is_configured": true, 00:20:22.107 "data_offset": 2048, 00:20:22.107 "data_size": 63488 00:20:22.107 }, 00:20:22.107 { 00:20:22.107 "name": "BaseBdev4", 00:20:22.107 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:22.107 "is_configured": true, 00:20:22.107 "data_offset": 2048, 00:20:22.107 "data_size": 63488 00:20:22.107 } 00:20:22.107 ] 00:20:22.107 }' 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:22.107 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:22.366 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:22.366 09:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:22.366 [2024-11-06 09:12:58.695037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:22.366 [2024-11-06 09:12:58.695892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:23.191 103.50 IOPS, 310.50 MiB/s [2024-11-06T09:12:59.726Z] 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:23.191 [2024-11-06 09:12:59.656285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.191 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:23.191 "name": "raid_bdev1", 00:20:23.191 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:23.191 "strip_size_kb": 0, 00:20:23.191 "state": "online", 00:20:23.191 "raid_level": "raid1", 00:20:23.191 "superblock": true, 00:20:23.191 "num_base_bdevs": 4, 00:20:23.191 "num_base_bdevs_discovered": 3, 00:20:23.191 "num_base_bdevs_operational": 3, 00:20:23.191 "process": { 00:20:23.191 "type": "rebuild", 00:20:23.191 "target": "spare", 00:20:23.191 "progress": { 00:20:23.191 "blocks": 51200, 00:20:23.191 "percent": 80 00:20:23.191 } 00:20:23.191 }, 00:20:23.191 "base_bdevs_list": [ 00:20:23.191 { 00:20:23.191 "name": "spare", 00:20:23.192 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:23.192 "is_configured": true, 00:20:23.192 "data_offset": 2048, 00:20:23.192 "data_size": 63488 00:20:23.192 }, 00:20:23.192 { 00:20:23.192 "name": null, 00:20:23.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.192 
"is_configured": false, 00:20:23.192 "data_offset": 0, 00:20:23.192 "data_size": 63488 00:20:23.192 }, 00:20:23.192 { 00:20:23.192 "name": "BaseBdev3", 00:20:23.192 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:23.192 "is_configured": true, 00:20:23.192 "data_offset": 2048, 00:20:23.192 "data_size": 63488 00:20:23.192 }, 00:20:23.192 { 00:20:23.192 "name": "BaseBdev4", 00:20:23.192 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:23.192 "is_configured": true, 00:20:23.192 "data_offset": 2048, 00:20:23.192 "data_size": 63488 00:20:23.192 } 00:20:23.192 ] 00:20:23.192 }' 00:20:23.192 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.450 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.450 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.450 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.450 09:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:23.709 [2024-11-06 09:12:59.996793] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:20:23.967 92.43 IOPS, 277.29 MiB/s [2024-11-06T09:13:00.502Z] [2024-11-06 09:13:00.344166] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:23.967 [2024-11-06 09:13:00.444310] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:23.967 [2024-11-06 09:13:00.447542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.534 "name": "raid_bdev1", 00:20:24.534 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:24.534 "strip_size_kb": 0, 00:20:24.534 "state": "online", 00:20:24.534 "raid_level": "raid1", 00:20:24.534 "superblock": true, 00:20:24.534 "num_base_bdevs": 4, 00:20:24.534 "num_base_bdevs_discovered": 3, 00:20:24.534 "num_base_bdevs_operational": 3, 00:20:24.534 "base_bdevs_list": [ 00:20:24.534 { 00:20:24.534 "name": "spare", 00:20:24.534 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:24.534 "is_configured": true, 00:20:24.534 "data_offset": 2048, 00:20:24.534 "data_size": 63488 00:20:24.534 }, 00:20:24.534 { 00:20:24.534 "name": null, 00:20:24.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.534 "is_configured": false, 00:20:24.534 "data_offset": 0, 00:20:24.534 "data_size": 63488 00:20:24.534 }, 00:20:24.534 { 00:20:24.534 "name": "BaseBdev3", 
00:20:24.534 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:24.534 "is_configured": true, 00:20:24.534 "data_offset": 2048, 00:20:24.534 "data_size": 63488 00:20:24.534 }, 00:20:24.534 { 00:20:24.534 "name": "BaseBdev4", 00:20:24.534 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:24.534 "is_configured": true, 00:20:24.534 "data_offset": 2048, 00:20:24.534 "data_size": 63488 00:20:24.534 } 00:20:24.534 ] 00:20:24.534 }' 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:24.534 09:13:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.534 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.534 "name": "raid_bdev1", 00:20:24.534 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:24.534 "strip_size_kb": 0, 00:20:24.534 "state": "online", 00:20:24.534 "raid_level": "raid1", 00:20:24.534 "superblock": true, 00:20:24.534 "num_base_bdevs": 4, 00:20:24.534 "num_base_bdevs_discovered": 3, 00:20:24.534 "num_base_bdevs_operational": 3, 00:20:24.534 "base_bdevs_list": [ 00:20:24.534 { 00:20:24.534 "name": "spare", 00:20:24.534 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:24.534 "is_configured": true, 00:20:24.534 "data_offset": 2048, 00:20:24.534 "data_size": 63488 00:20:24.534 }, 00:20:24.534 { 00:20:24.534 "name": null, 00:20:24.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.534 "is_configured": false, 00:20:24.534 "data_offset": 0, 00:20:24.534 "data_size": 63488 00:20:24.534 }, 00:20:24.534 { 00:20:24.534 "name": "BaseBdev3", 00:20:24.534 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:24.534 "is_configured": true, 00:20:24.534 "data_offset": 2048, 00:20:24.534 "data_size": 63488 00:20:24.534 }, 00:20:24.534 { 00:20:24.534 "name": "BaseBdev4", 00:20:24.534 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:24.534 "is_configured": true, 00:20:24.534 "data_offset": 2048, 00:20:24.534 "data_size": 63488 00:20:24.534 } 00:20:24.534 ] 00:20:24.534 }' 00:20:24.534 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
none == \n\o\n\e ]] 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.793 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.793 "name": "raid_bdev1", 00:20:24.793 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:24.793 "strip_size_kb": 0, 00:20:24.794 "state": "online", 00:20:24.794 
"raid_level": "raid1", 00:20:24.794 "superblock": true, 00:20:24.794 "num_base_bdevs": 4, 00:20:24.794 "num_base_bdevs_discovered": 3, 00:20:24.794 "num_base_bdevs_operational": 3, 00:20:24.794 "base_bdevs_list": [ 00:20:24.794 { 00:20:24.794 "name": "spare", 00:20:24.794 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:24.794 "is_configured": true, 00:20:24.794 "data_offset": 2048, 00:20:24.794 "data_size": 63488 00:20:24.794 }, 00:20:24.794 { 00:20:24.794 "name": null, 00:20:24.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.794 "is_configured": false, 00:20:24.794 "data_offset": 0, 00:20:24.794 "data_size": 63488 00:20:24.794 }, 00:20:24.794 { 00:20:24.794 "name": "BaseBdev3", 00:20:24.794 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:24.794 "is_configured": true, 00:20:24.794 "data_offset": 2048, 00:20:24.794 "data_size": 63488 00:20:24.794 }, 00:20:24.794 { 00:20:24.794 "name": "BaseBdev4", 00:20:24.794 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:24.794 "is_configured": true, 00:20:24.794 "data_offset": 2048, 00:20:24.794 "data_size": 63488 00:20:24.794 } 00:20:24.794 ] 00:20:24.794 }' 00:20:24.794 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.794 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:25.313 86.62 IOPS, 259.88 MiB/s [2024-11-06T09:13:01.848Z] 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:25.313 [2024-11-06 09:13:01.650904] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:25.313 [2024-11-06 09:13:01.650953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:25.313 
00:20:25.313 Latency(us)
00:20:25.313 [2024-11-06T09:13:01.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:25.313 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728)
00:20:25.313 raid_bdev1 : 8.38 83.66 250.99 0.00 0.00 16490.56 296.03 118203.11
00:20:25.313 [2024-11-06T09:13:01.848Z] ===================================================================================================================
00:20:25.313 [2024-11-06T09:13:01.848Z] Total : 83.66 250.99 0.00 0.00 16490.56 296.03 118203.11
00:20:25.313 [2024-11-06 09:13:01.703565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:25.313 {
00:20:25.313 "results": [
00:20:25.313 {
00:20:25.313 "job": "raid_bdev1",
00:20:25.313 "core_mask": "0x1",
00:20:25.313 "workload": "randrw",
00:20:25.313 "percentage": 50,
00:20:25.313 "status": "finished",
00:20:25.313 "queue_depth": 2,
00:20:25.313 "io_size": 3145728,
00:20:25.313 "runtime": 8.378884,
00:20:25.313 "iops": 83.66269302689952,
00:20:25.313 "mibps": 250.98807908069855,
00:20:25.313 "io_failed": 0,
00:20:25.313 "io_timeout": 0,
00:20:25.313 "avg_latency_us": 16490.556555569965,
00:20:25.313 "min_latency_us": 296.0290909090909,
00:20:25.313 "max_latency_us": 118203.11272727273
00:20:25.313 }
00:20:25.313 ],
00:20:25.313 "core_count": 1
00:20:25.313 }
00:20:25.313 [2024-11-06 09:13:01.703812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:20:25.313 [2024-11-06 09:13:01.703996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:20:25.313 [2024-11-06 09:13:01.704022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]]
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']'
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']'
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare')
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:25.313 09:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0
00:20:25.572 /dev/nbd0
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:20:25.572 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:25.831 1+0 records in
00:20:25.831 1+0 records out
00:20:25.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054788 s, 7.5 MB/s
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']'
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']'
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3')
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:25.831 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1
00:20:26.126 /dev/nbd1
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:26.126 1+0 records in
00:20:26.126 1+0 records out
00:20:26.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066618 s, 6.1 MB/s
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.126 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}"
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']'
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4')
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1')
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:26.693 09:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1
00:20:26.951 /dev/nbd1
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:20:26.951 1+0 records in
00:20:26.951 1+0 records out
00:20:26.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416097 s, 9.8 MB/s
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1')
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:26.951 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:20:27.518 09:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:20:27.776 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:20:27.776 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:20:27.776 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:20:27.776 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:20:27.776 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:20:27.776 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']'
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:27.777 [2024-11-06 09:13:04.085180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:20:27.777 [2024-11-06 09:13:04.085274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:20:27.777 [2024-11-06 09:13:04.085306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:20:27.777 [2024-11-06 09:13:04.085325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:20:27.777 [2024-11-06 09:13:04.088307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:20:27.777 spare
00:20:27.777 [2024-11-06 09:13:04.088505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:20:27.777 [2024-11-06 09:13:04.088643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:20:27.777 [2024-11-06 09:13:04.088720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:27.777 [2024-11-06 09:13:04.088937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:20:27.777 [2024-11-06 09:13:04.089080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:27.777 [2024-11-06 09:13:04.189266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:20:27.777 [2024-11-06 09:13:04.189340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:20:27.777 [2024-11-06 09:13:04.189836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160
00:20:27.777 [2024-11-06 09:13:04.190109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:20:27.777 [2024-11-06 09:13:04.190126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00
00:20:27.777 [2024-11-06 09:13:04.190362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:27.777 "name": "raid_bdev1",
00:20:27.777 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5",
00:20:27.777 "strip_size_kb": 0,
00:20:27.777 "state": "online",
00:20:27.777 "raid_level": "raid1",
00:20:27.777 "superblock": true,
00:20:27.777 "num_base_bdevs": 4,
00:20:27.777 "num_base_bdevs_discovered": 3,
00:20:27.777 "num_base_bdevs_operational": 3,
00:20:27.777 "base_bdevs_list": [
00:20:27.777 {
00:20:27.777 "name": "spare",
00:20:27.777 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1",
00:20:27.777 "is_configured": true,
00:20:27.777 "data_offset": 2048,
00:20:27.777 "data_size": 63488
00:20:27.777 },
00:20:27.777 {
00:20:27.777 "name": null,
00:20:27.777 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:27.777 "is_configured": false,
00:20:27.777 "data_offset": 2048,
00:20:27.777 "data_size": 63488
00:20:27.777 },
00:20:27.777 {
00:20:27.777 "name": "BaseBdev3",
00:20:27.777 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47",
00:20:27.777 "is_configured": true,
00:20:27.777 "data_offset": 2048,
00:20:27.777 "data_size": 63488
00:20:27.777 },
00:20:27.777 {
00:20:27.777 "name": "BaseBdev4",
00:20:27.777 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860",
00:20:27.777 "is_configured": true,
00:20:27.777 "data_offset": 2048,
00:20:27.777 "data_size": 63488
00:20:27.777 }
00:20:27.777 ]
00:20:27.777 }'
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:27.777 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:28.343 "name": "raid_bdev1",
00:20:28.343 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5",
00:20:28.343 "strip_size_kb": 0,
00:20:28.343 "state": "online",
00:20:28.343 "raid_level": "raid1",
00:20:28.343 "superblock": true,
00:20:28.343 "num_base_bdevs": 4,
00:20:28.343 "num_base_bdevs_discovered": 3,
00:20:28.343 "num_base_bdevs_operational": 3,
00:20:28.343 "base_bdevs_list": [
00:20:28.343 {
00:20:28.343 "name": "spare",
00:20:28.343 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1",
00:20:28.343 "is_configured": true,
00:20:28.343 "data_offset": 2048,
00:20:28.343 "data_size": 63488
00:20:28.343 },
00:20:28.343 {
00:20:28.343 "name": null,
00:20:28.343 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:28.343 "is_configured": false,
00:20:28.343 "data_offset": 2048,
00:20:28.343 "data_size": 63488
00:20:28.343 },
00:20:28.343 {
00:20:28.343 "name": "BaseBdev3",
00:20:28.343 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47",
00:20:28.343 "is_configured": true,
00:20:28.343 "data_offset": 2048,
00:20:28.343 "data_size": 63488
00:20:28.343 },
00:20:28.343 {
00:20:28.343 "name": "BaseBdev4",
00:20:28.343 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860",
00:20:28.343 "is_configured": true,
00:20:28.343 "data_offset": 2048,
00:20:28.343 "data_size": 63488
00:20:28.343 }
00:20:28.343 ]
00:20:28.343 }'
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name'
00:20:28.343 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]]
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:28.601 [2024-11-06 09:13:04.905613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:28.601 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:20:28.601 "name": "raid_bdev1",
00:20:28.601 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5",
00:20:28.601 "strip_size_kb": 0,
00:20:28.601 "state": "online",
00:20:28.601 "raid_level": "raid1",
00:20:28.601 "superblock": true,
00:20:28.601 "num_base_bdevs": 4,
00:20:28.601 "num_base_bdevs_discovered": 2,
00:20:28.601 "num_base_bdevs_operational": 2,
00:20:28.601 "base_bdevs_list": [
00:20:28.601 {
00:20:28.601 "name": null,
00:20:28.601 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:28.601 "is_configured": false,
00:20:28.601 "data_offset": 0,
00:20:28.601 "data_size": 63488
00:20:28.601 },
00:20:28.601 {
00:20:28.601 "name": null,
00:20:28.601 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:28.601 "is_configured": false,
00:20:28.601 "data_offset": 2048,
00:20:28.601 "data_size": 63488
00:20:28.601 },
00:20:28.601 {
00:20:28.602 "name": "BaseBdev3",
00:20:28.602 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47",
00:20:28.602 "is_configured": true,
00:20:28.602 "data_offset": 2048,
00:20:28.602 "data_size": 63488
00:20:28.602 },
00:20:28.602 {
00:20:28.602 "name": "BaseBdev4",
00:20:28.602 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860",
00:20:28.602 "is_configured": true,
00:20:28.602 "data_offset": 2048,
00:20:28.602 "data_size": 63488
00:20:28.602 }
00:20:28.602 ]
00:20:28.602 }'
00:20:28.602 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:20:28.602 09:13:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:28.860 09:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:20:28.860 09:13:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:28.860 09:13:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:29.118 [2024-11-06 09:13:05.397856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:29.118 [2024-11-06 09:13:05.398097] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6)
00:20:29.118 [2024-11-06 09:13:05.398135] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:20:29.118 [2024-11-06 09:13:05.398189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:20:29.118 [2024-11-06 09:13:05.412968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230
00:20:29.118 09:13:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:29.118 09:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1
00:20:29.118 [2024-11-06 09:13:05.415576] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:20:30.053 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:20:30.054 "name": "raid_bdev1",
00:20:30.054 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5",
00:20:30.054 "strip_size_kb": 0,
00:20:30.054 "state": "online",
00:20:30.054 "raid_level": "raid1",
00:20:30.054 "superblock": true,
00:20:30.054 "num_base_bdevs": 4,
00:20:30.054 "num_base_bdevs_discovered": 3,
00:20:30.054 "num_base_bdevs_operational": 3,
00:20:30.054 "process": {
00:20:30.054 "type": "rebuild",
00:20:30.054 "target": "spare",
00:20:30.054 "progress": {
00:20:30.054 "blocks": 20480,
00:20:30.054 "percent": 32
00:20:30.054 }
00:20:30.054 },
00:20:30.054 "base_bdevs_list": [
00:20:30.054 {
00:20:30.054 "name": "spare",
00:20:30.054 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1",
00:20:30.054 "is_configured": true,
00:20:30.054 "data_offset": 2048,
00:20:30.054 "data_size": 63488
00:20:30.054 },
00:20:30.054 {
00:20:30.054 "name": null,
00:20:30.054 "uuid": "00000000-0000-0000-0000-000000000000",
00:20:30.054 "is_configured": false,
00:20:30.054 "data_offset": 2048,
00:20:30.054 "data_size": 63488
00:20:30.054 },
00:20:30.054 {
00:20:30.054 "name": "BaseBdev3",
00:20:30.054 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47",
00:20:30.054 "is_configured": true,
00:20:30.054 "data_offset": 2048,
00:20:30.054 "data_size": 63488
00:20:30.054 },
00:20:30.054 {
00:20:30.054 "name": "BaseBdev4",
00:20:30.054 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860",
00:20:30.054 "is_configured": true,
00:20:30.054 "data_offset": 2048,
00:20:30.054 "data_size": 63488
00:20:30.054 }
00:20:30.054 ]
00:20:30.054 }'
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:30.054 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:20:30.054 [2024-11-06 09:13:06.581445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:30.312 [2024-11-06 09:13:06.624624] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:20:30.312 [2024-11-06 09:13:06.625061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:20:30.312 [2024-11-06 09:13:06.625102] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:20:30.312 [2024-11-06 09:13:06.625120] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:20:30.312 09:13:06
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.312 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.312 "name": "raid_bdev1", 00:20:30.312 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:30.312 "strip_size_kb": 0, 00:20:30.312 "state": "online", 00:20:30.312 "raid_level": "raid1", 00:20:30.312 "superblock": true, 00:20:30.312 "num_base_bdevs": 4, 00:20:30.312 "num_base_bdevs_discovered": 2, 00:20:30.312 "num_base_bdevs_operational": 2, 00:20:30.312 "base_bdevs_list": [ 00:20:30.312 { 00:20:30.312 "name": null, 00:20:30.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.312 "is_configured": false, 00:20:30.312 "data_offset": 0, 00:20:30.312 "data_size": 63488 00:20:30.312 }, 00:20:30.312 { 00:20:30.312 "name": null, 00:20:30.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.312 "is_configured": false, 00:20:30.312 "data_offset": 2048, 00:20:30.312 "data_size": 63488 00:20:30.312 }, 00:20:30.312 { 00:20:30.312 "name": "BaseBdev3", 00:20:30.312 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:30.312 "is_configured": true, 00:20:30.313 "data_offset": 2048, 00:20:30.313 "data_size": 63488 00:20:30.313 }, 00:20:30.313 { 00:20:30.313 "name": "BaseBdev4", 00:20:30.313 "uuid": 
"4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:30.313 "is_configured": true, 00:20:30.313 "data_offset": 2048, 00:20:30.313 "data_size": 63488 00:20:30.313 } 00:20:30.313 ] 00:20:30.313 }' 00:20:30.313 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.313 09:13:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:30.881 09:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:30.881 09:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.881 09:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:30.881 [2024-11-06 09:13:07.208844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:30.881 [2024-11-06 09:13:07.209083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.881 [2024-11-06 09:13:07.209131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:30.881 [2024-11-06 09:13:07.209151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.881 [2024-11-06 09:13:07.209752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.881 [2024-11-06 09:13:07.209790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:30.881 [2024-11-06 09:13:07.209922] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:30.881 [2024-11-06 09:13:07.209955] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:20:30.881 [2024-11-06 09:13:07.209969] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:30.881 [2024-11-06 09:13:07.210004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:30.881 spare 00:20:30.881 [2024-11-06 09:13:07.224205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:20:30.881 09:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.881 09:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:30.881 [2024-11-06 09:13:07.226855] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.817 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:31.817 "name": "raid_bdev1", 00:20:31.817 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:31.817 "strip_size_kb": 0, 00:20:31.817 
"state": "online", 00:20:31.817 "raid_level": "raid1", 00:20:31.817 "superblock": true, 00:20:31.818 "num_base_bdevs": 4, 00:20:31.818 "num_base_bdevs_discovered": 3, 00:20:31.818 "num_base_bdevs_operational": 3, 00:20:31.818 "process": { 00:20:31.818 "type": "rebuild", 00:20:31.818 "target": "spare", 00:20:31.818 "progress": { 00:20:31.818 "blocks": 20480, 00:20:31.818 "percent": 32 00:20:31.818 } 00:20:31.818 }, 00:20:31.818 "base_bdevs_list": [ 00:20:31.818 { 00:20:31.818 "name": "spare", 00:20:31.818 "uuid": "c199a20b-b36d-586a-8f54-512df575f8d1", 00:20:31.818 "is_configured": true, 00:20:31.818 "data_offset": 2048, 00:20:31.818 "data_size": 63488 00:20:31.818 }, 00:20:31.818 { 00:20:31.818 "name": null, 00:20:31.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.818 "is_configured": false, 00:20:31.818 "data_offset": 2048, 00:20:31.818 "data_size": 63488 00:20:31.818 }, 00:20:31.818 { 00:20:31.818 "name": "BaseBdev3", 00:20:31.818 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:31.818 "is_configured": true, 00:20:31.818 "data_offset": 2048, 00:20:31.818 "data_size": 63488 00:20:31.818 }, 00:20:31.818 { 00:20:31.818 "name": "BaseBdev4", 00:20:31.818 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:31.818 "is_configured": true, 00:20:31.818 "data_offset": 2048, 00:20:31.818 "data_size": 63488 00:20:31.818 } 00:20:31.818 ] 00:20:31.818 }' 00:20:31.818 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:31.818 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.818 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:32.076 09:13:08 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.076 [2024-11-06 09:13:08.404456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.076 [2024-11-06 09:13:08.436442] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:32.076 [2024-11-06 09:13:08.436718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.076 [2024-11-06 09:13:08.436757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.076 [2024-11-06 09:13:08.436771] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.076 09:13:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.076 "name": "raid_bdev1", 00:20:32.076 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:32.076 "strip_size_kb": 0, 00:20:32.076 "state": "online", 00:20:32.076 "raid_level": "raid1", 00:20:32.076 "superblock": true, 00:20:32.076 "num_base_bdevs": 4, 00:20:32.076 "num_base_bdevs_discovered": 2, 00:20:32.076 "num_base_bdevs_operational": 2, 00:20:32.076 "base_bdevs_list": [ 00:20:32.076 { 00:20:32.076 "name": null, 00:20:32.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.076 "is_configured": false, 00:20:32.076 "data_offset": 0, 00:20:32.076 "data_size": 63488 00:20:32.076 }, 00:20:32.076 { 00:20:32.076 "name": null, 00:20:32.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.076 "is_configured": false, 00:20:32.076 "data_offset": 2048, 00:20:32.076 "data_size": 63488 00:20:32.076 }, 00:20:32.076 { 00:20:32.076 "name": "BaseBdev3", 00:20:32.076 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:32.076 "is_configured": true, 00:20:32.076 "data_offset": 2048, 00:20:32.076 "data_size": 63488 00:20:32.076 }, 00:20:32.076 { 00:20:32.076 "name": "BaseBdev4", 00:20:32.076 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:32.076 "is_configured": true, 00:20:32.076 "data_offset": 2048, 00:20:32.076 
"data_size": 63488 00:20:32.076 } 00:20:32.076 ] 00:20:32.076 }' 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.076 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.643 09:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.643 "name": "raid_bdev1", 00:20:32.643 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:32.643 "strip_size_kb": 0, 00:20:32.643 "state": "online", 00:20:32.643 "raid_level": "raid1", 00:20:32.643 "superblock": true, 00:20:32.643 "num_base_bdevs": 4, 00:20:32.643 "num_base_bdevs_discovered": 2, 00:20:32.643 "num_base_bdevs_operational": 2, 00:20:32.643 "base_bdevs_list": [ 00:20:32.643 { 00:20:32.643 "name": null, 00:20:32.643 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:32.643 "is_configured": false, 00:20:32.643 "data_offset": 0, 00:20:32.643 "data_size": 63488 00:20:32.643 }, 00:20:32.643 { 00:20:32.643 "name": null, 00:20:32.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.643 "is_configured": false, 00:20:32.643 "data_offset": 2048, 00:20:32.643 "data_size": 63488 00:20:32.643 }, 00:20:32.643 { 00:20:32.643 "name": "BaseBdev3", 00:20:32.643 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:32.643 "is_configured": true, 00:20:32.643 "data_offset": 2048, 00:20:32.643 "data_size": 63488 00:20:32.643 }, 00:20:32.643 { 00:20:32.643 "name": "BaseBdev4", 00:20:32.643 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:32.643 "is_configured": true, 00:20:32.643 "data_offset": 2048, 00:20:32.643 "data_size": 63488 00:20:32.643 } 00:20:32.643 ] 00:20:32.643 }' 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.643 09:13:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:32.643 [2024-11-06 09:13:09.157284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:32.643 [2024-11-06 09:13:09.157361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.643 [2024-11-06 09:13:09.157400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:20:32.643 [2024-11-06 09:13:09.157415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.643 [2024-11-06 09:13:09.157983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.643 [2024-11-06 09:13:09.158024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:32.643 [2024-11-06 09:13:09.158135] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:32.643 [2024-11-06 09:13:09.158157] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:32.643 [2024-11-06 09:13:09.158172] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:32.643 [2024-11-06 09:13:09.158185] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:32.643 BaseBdev1 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.643 09:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.020 "name": "raid_bdev1", 00:20:34.020 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:34.020 "strip_size_kb": 0, 00:20:34.020 "state": "online", 00:20:34.020 "raid_level": "raid1", 00:20:34.020 "superblock": true, 00:20:34.020 "num_base_bdevs": 4, 00:20:34.020 "num_base_bdevs_discovered": 2, 00:20:34.020 "num_base_bdevs_operational": 2, 00:20:34.020 "base_bdevs_list": [ 00:20:34.020 { 00:20:34.020 "name": null, 00:20:34.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.020 "is_configured": false, 00:20:34.020 
"data_offset": 0, 00:20:34.020 "data_size": 63488 00:20:34.020 }, 00:20:34.020 { 00:20:34.020 "name": null, 00:20:34.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.020 "is_configured": false, 00:20:34.020 "data_offset": 2048, 00:20:34.020 "data_size": 63488 00:20:34.020 }, 00:20:34.020 { 00:20:34.020 "name": "BaseBdev3", 00:20:34.020 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:34.020 "is_configured": true, 00:20:34.020 "data_offset": 2048, 00:20:34.020 "data_size": 63488 00:20:34.020 }, 00:20:34.020 { 00:20:34.020 "name": "BaseBdev4", 00:20:34.020 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:34.020 "is_configured": true, 00:20:34.020 "data_offset": 2048, 00:20:34.020 "data_size": 63488 00:20:34.020 } 00:20:34.020 ] 00:20:34.020 }' 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.020 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.279 "name": "raid_bdev1", 00:20:34.279 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:34.279 "strip_size_kb": 0, 00:20:34.279 "state": "online", 00:20:34.279 "raid_level": "raid1", 00:20:34.279 "superblock": true, 00:20:34.279 "num_base_bdevs": 4, 00:20:34.279 "num_base_bdevs_discovered": 2, 00:20:34.279 "num_base_bdevs_operational": 2, 00:20:34.279 "base_bdevs_list": [ 00:20:34.279 { 00:20:34.279 "name": null, 00:20:34.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.279 "is_configured": false, 00:20:34.279 "data_offset": 0, 00:20:34.279 "data_size": 63488 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "name": null, 00:20:34.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.279 "is_configured": false, 00:20:34.279 "data_offset": 2048, 00:20:34.279 "data_size": 63488 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "name": "BaseBdev3", 00:20:34.279 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:34.279 "is_configured": true, 00:20:34.279 "data_offset": 2048, 00:20:34.279 "data_size": 63488 00:20:34.279 }, 00:20:34.279 { 00:20:34.279 "name": "BaseBdev4", 00:20:34.279 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:34.279 "is_configured": true, 00:20:34.279 "data_offset": 2048, 00:20:34.279 "data_size": 63488 00:20:34.279 } 00:20:34.279 ] 00:20:34.279 }' 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:34.279 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:34.537 
09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.537 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:34.537 [2024-11-06 09:13:10.846276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:34.537 [2024-11-06 09:13:10.846689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:20:34.537 [2024-11-06 09:13:10.846724] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:34.537 request: 00:20:34.537 { 00:20:34.537 "base_bdev": "BaseBdev1", 00:20:34.537 "raid_bdev": "raid_bdev1", 00:20:34.538 "method": "bdev_raid_add_base_bdev", 00:20:34.538 "req_id": 1 00:20:34.538 } 00:20:34.538 Got JSON-RPC error response 00:20:34.538 response: 00:20:34.538 { 00:20:34.538 "code": -22, 00:20:34.538 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:34.538 } 00:20:34.538 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:34.538 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:20:34.538 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:34.538 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:34.538 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:34.538 09:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.488 09:13:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.488 "name": "raid_bdev1", 00:20:35.488 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:35.488 "strip_size_kb": 0, 00:20:35.488 "state": "online", 00:20:35.488 "raid_level": "raid1", 00:20:35.488 "superblock": true, 00:20:35.488 "num_base_bdevs": 4, 00:20:35.488 "num_base_bdevs_discovered": 2, 00:20:35.488 "num_base_bdevs_operational": 2, 00:20:35.488 "base_bdevs_list": [ 00:20:35.488 { 00:20:35.488 "name": null, 00:20:35.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.488 "is_configured": false, 00:20:35.488 "data_offset": 0, 00:20:35.488 "data_size": 63488 00:20:35.488 }, 00:20:35.488 { 00:20:35.488 "name": null, 00:20:35.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.488 "is_configured": false, 00:20:35.488 "data_offset": 2048, 00:20:35.488 "data_size": 63488 00:20:35.488 }, 00:20:35.488 { 00:20:35.488 "name": "BaseBdev3", 00:20:35.488 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:35.488 "is_configured": true, 00:20:35.488 "data_offset": 2048, 00:20:35.488 "data_size": 63488 00:20:35.488 }, 00:20:35.488 { 00:20:35.488 "name": "BaseBdev4", 00:20:35.488 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:35.488 "is_configured": true, 00:20:35.488 "data_offset": 2048, 00:20:35.488 "data_size": 63488 00:20:35.488 } 00:20:35.488 ] 00:20:35.488 }' 00:20:35.488 09:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.488 09:13:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.054 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.054 "name": "raid_bdev1", 00:20:36.054 "uuid": "9b1caf5c-b3bf-47c8-b469-b119b194b7b5", 00:20:36.054 "strip_size_kb": 0, 00:20:36.054 "state": "online", 00:20:36.054 "raid_level": "raid1", 00:20:36.054 "superblock": true, 00:20:36.054 "num_base_bdevs": 4, 00:20:36.054 "num_base_bdevs_discovered": 2, 00:20:36.054 "num_base_bdevs_operational": 2, 00:20:36.054 "base_bdevs_list": [ 00:20:36.054 { 00:20:36.054 "name": null, 00:20:36.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.054 "is_configured": false, 00:20:36.054 "data_offset": 0, 00:20:36.054 "data_size": 63488 00:20:36.054 }, 00:20:36.054 { 00:20:36.054 "name": null, 00:20:36.054 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:36.054 "is_configured": false, 00:20:36.054 "data_offset": 2048, 00:20:36.054 "data_size": 63488 00:20:36.055 }, 00:20:36.055 { 00:20:36.055 "name": "BaseBdev3", 00:20:36.055 "uuid": "5ef037dc-eeae-5b0d-8c73-1749e2a34d47", 00:20:36.055 "is_configured": true, 00:20:36.055 "data_offset": 2048, 00:20:36.055 "data_size": 63488 00:20:36.055 }, 00:20:36.055 { 00:20:36.055 "name": "BaseBdev4", 00:20:36.055 "uuid": "4c29e836-9f85-5ced-9afe-4b806bf32860", 00:20:36.055 "is_configured": true, 00:20:36.055 "data_offset": 2048, 00:20:36.055 "data_size": 63488 00:20:36.055 } 00:20:36.055 ] 00:20:36.055 }' 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79401 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 79401 ']' 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 79401 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.055 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79401 00:20:36.314 killing process with pid 79401 00:20:36.314 Received shutdown signal, test time was about 19.304375 seconds 00:20:36.314 00:20:36.314 Latency(us) 00:20:36.314 [2024-11-06T09:13:12.849Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:20:36.314 [2024-11-06T09:13:12.849Z] =================================================================================================================== 00:20:36.314 [2024-11-06T09:13:12.849Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.314 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:36.314 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:36.314 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79401' 00:20:36.314 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 79401 00:20:36.314 [2024-11-06 09:13:12.608227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:36.314 09:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 79401 00:20:36.314 [2024-11-06 09:13:12.608383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.314 [2024-11-06 09:13:12.608473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.314 [2024-11-06 09:13:12.608497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:36.573 [2024-11-06 09:13:12.996324] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:37.994 09:13:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:20:37.994 00:20:37.994 real 0m22.962s 00:20:37.994 user 0m31.468s 00:20:37.994 sys 0m2.332s 00:20:37.994 09:13:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:37.994 ************************************ 00:20:37.994 END TEST raid_rebuild_test_sb_io 00:20:37.994 ************************************ 00:20:37.994 09:13:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.994 09:13:14 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:37.994 09:13:14 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:20:37.994 09:13:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:37.994 09:13:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:37.994 09:13:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:37.994 ************************************ 00:20:37.994 START TEST raid5f_state_function_test 00:20:37.994 ************************************ 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:37.994 09:13:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80135 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:37.994 Process raid pid: 80135 00:20:37.994 
09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80135' 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80135 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 80135 ']' 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:37.994 09:13:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.994 [2024-11-06 09:13:14.267645] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:20:37.994 [2024-11-06 09:13:14.267947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.994 [2024-11-06 09:13:14.445693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.253 [2024-11-06 09:13:14.592356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.512 [2024-11-06 09:13:14.807244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:38.512 [2024-11-06 09:13:14.807505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.148 [2024-11-06 09:13:15.319000] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:39.148 [2024-11-06 09:13:15.319207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:39.148 [2024-11-06 09:13:15.319366] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:39.148 [2024-11-06 09:13:15.319404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:39.148 [2024-11-06 09:13:15.319418] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:20:39.148 [2024-11-06 09:13:15.319433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:20:39.148 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.148 "name": "Existed_Raid", 00:20:39.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.148 "strip_size_kb": 64, 00:20:39.148 "state": "configuring", 00:20:39.148 "raid_level": "raid5f", 00:20:39.148 "superblock": false, 00:20:39.148 "num_base_bdevs": 3, 00:20:39.148 "num_base_bdevs_discovered": 0, 00:20:39.148 "num_base_bdevs_operational": 3, 00:20:39.148 "base_bdevs_list": [ 00:20:39.148 { 00:20:39.148 "name": "BaseBdev1", 00:20:39.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.148 "is_configured": false, 00:20:39.148 "data_offset": 0, 00:20:39.148 "data_size": 0 00:20:39.148 }, 00:20:39.148 { 00:20:39.148 "name": "BaseBdev2", 00:20:39.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.148 "is_configured": false, 00:20:39.148 "data_offset": 0, 00:20:39.148 "data_size": 0 00:20:39.148 }, 00:20:39.148 { 00:20:39.148 "name": "BaseBdev3", 00:20:39.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.148 "is_configured": false, 00:20:39.149 "data_offset": 0, 00:20:39.149 "data_size": 0 00:20:39.149 } 00:20:39.149 ] 00:20:39.149 }' 00:20:39.149 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.149 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.408 [2024-11-06 09:13:15.867090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:39.408 [2024-11-06 09:13:15.867300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.408 [2024-11-06 09:13:15.879067] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:39.408 [2024-11-06 09:13:15.879245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:39.408 [2024-11-06 09:13:15.879372] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:39.408 [2024-11-06 09:13:15.879407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:39.408 [2024-11-06 09:13:15.879419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:39.408 [2024-11-06 09:13:15.879433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.408 [2024-11-06 09:13:15.927996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.408 BaseBdev1 00:20:39.408 09:13:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.408 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.667 [ 00:20:39.667 { 00:20:39.667 "name": "BaseBdev1", 00:20:39.667 "aliases": [ 00:20:39.667 "58ace0ea-0359-4b30-acdc-0dec768df4a4" 00:20:39.667 ], 00:20:39.667 "product_name": "Malloc disk", 00:20:39.667 "block_size": 512, 00:20:39.667 "num_blocks": 65536, 00:20:39.668 "uuid": "58ace0ea-0359-4b30-acdc-0dec768df4a4", 00:20:39.668 "assigned_rate_limits": { 00:20:39.668 "rw_ios_per_sec": 0, 00:20:39.668 
"rw_mbytes_per_sec": 0, 00:20:39.668 "r_mbytes_per_sec": 0, 00:20:39.668 "w_mbytes_per_sec": 0 00:20:39.668 }, 00:20:39.668 "claimed": true, 00:20:39.668 "claim_type": "exclusive_write", 00:20:39.668 "zoned": false, 00:20:39.668 "supported_io_types": { 00:20:39.668 "read": true, 00:20:39.668 "write": true, 00:20:39.668 "unmap": true, 00:20:39.668 "flush": true, 00:20:39.668 "reset": true, 00:20:39.668 "nvme_admin": false, 00:20:39.668 "nvme_io": false, 00:20:39.668 "nvme_io_md": false, 00:20:39.668 "write_zeroes": true, 00:20:39.668 "zcopy": true, 00:20:39.668 "get_zone_info": false, 00:20:39.668 "zone_management": false, 00:20:39.668 "zone_append": false, 00:20:39.668 "compare": false, 00:20:39.668 "compare_and_write": false, 00:20:39.668 "abort": true, 00:20:39.668 "seek_hole": false, 00:20:39.668 "seek_data": false, 00:20:39.668 "copy": true, 00:20:39.668 "nvme_iov_md": false 00:20:39.668 }, 00:20:39.668 "memory_domains": [ 00:20:39.668 { 00:20:39.668 "dma_device_id": "system", 00:20:39.668 "dma_device_type": 1 00:20:39.668 }, 00:20:39.668 { 00:20:39.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.668 "dma_device_type": 2 00:20:39.668 } 00:20:39.668 ], 00:20:39.668 "driver_specific": {} 00:20:39.668 } 00:20:39.668 ] 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:39.668 09:13:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.668 09:13:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.668 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.668 "name": "Existed_Raid", 00:20:39.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.668 "strip_size_kb": 64, 00:20:39.668 "state": "configuring", 00:20:39.668 "raid_level": "raid5f", 00:20:39.668 "superblock": false, 00:20:39.668 "num_base_bdevs": 3, 00:20:39.668 "num_base_bdevs_discovered": 1, 00:20:39.668 "num_base_bdevs_operational": 3, 00:20:39.668 "base_bdevs_list": [ 00:20:39.668 { 00:20:39.668 "name": "BaseBdev1", 00:20:39.668 "uuid": "58ace0ea-0359-4b30-acdc-0dec768df4a4", 00:20:39.668 "is_configured": true, 00:20:39.668 "data_offset": 0, 00:20:39.668 "data_size": 65536 00:20:39.668 }, 00:20:39.668 { 00:20:39.668 "name": 
"BaseBdev2", 00:20:39.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.668 "is_configured": false, 00:20:39.668 "data_offset": 0, 00:20:39.668 "data_size": 0 00:20:39.668 }, 00:20:39.668 { 00:20:39.668 "name": "BaseBdev3", 00:20:39.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.668 "is_configured": false, 00:20:39.668 "data_offset": 0, 00:20:39.668 "data_size": 0 00:20:39.668 } 00:20:39.668 ] 00:20:39.668 }' 00:20:39.668 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.668 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.235 [2024-11-06 09:13:16.484220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:40.235 [2024-11-06 09:13:16.484287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.235 [2024-11-06 09:13:16.492283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:40.235 [2024-11-06 09:13:16.494897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:20:40.235 [2024-11-06 09:13:16.495077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:40.235 [2024-11-06 09:13:16.495208] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:40.235 [2024-11-06 09:13:16.495271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.235 "name": "Existed_Raid", 00:20:40.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.235 "strip_size_kb": 64, 00:20:40.235 "state": "configuring", 00:20:40.235 "raid_level": "raid5f", 00:20:40.235 "superblock": false, 00:20:40.235 "num_base_bdevs": 3, 00:20:40.235 "num_base_bdevs_discovered": 1, 00:20:40.235 "num_base_bdevs_operational": 3, 00:20:40.235 "base_bdevs_list": [ 00:20:40.235 { 00:20:40.235 "name": "BaseBdev1", 00:20:40.235 "uuid": "58ace0ea-0359-4b30-acdc-0dec768df4a4", 00:20:40.235 "is_configured": true, 00:20:40.235 "data_offset": 0, 00:20:40.235 "data_size": 65536 00:20:40.235 }, 00:20:40.235 { 00:20:40.235 "name": "BaseBdev2", 00:20:40.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.235 "is_configured": false, 00:20:40.235 "data_offset": 0, 00:20:40.235 "data_size": 0 00:20:40.235 }, 00:20:40.235 { 00:20:40.235 "name": "BaseBdev3", 00:20:40.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.235 "is_configured": false, 00:20:40.235 "data_offset": 0, 00:20:40.235 "data_size": 0 00:20:40.235 } 00:20:40.235 ] 00:20:40.235 }' 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.235 09:13:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.495 09:13:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:40.495 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.495 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.754 BaseBdev2 00:20:40.754 [2024-11-06 09:13:17.051213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.754 09:13:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.755 [ 00:20:40.755 { 00:20:40.755 "name": "BaseBdev2", 00:20:40.755 "aliases": [ 00:20:40.755 "0c03cd5d-cdfb-4346-8bdc-922966065efb" 00:20:40.755 ], 00:20:40.755 "product_name": "Malloc disk", 00:20:40.755 "block_size": 512, 00:20:40.755 "num_blocks": 65536, 00:20:40.755 "uuid": "0c03cd5d-cdfb-4346-8bdc-922966065efb", 00:20:40.755 "assigned_rate_limits": { 00:20:40.755 "rw_ios_per_sec": 0, 00:20:40.755 "rw_mbytes_per_sec": 0, 00:20:40.755 "r_mbytes_per_sec": 0, 00:20:40.755 "w_mbytes_per_sec": 0 00:20:40.755 }, 00:20:40.755 "claimed": true, 00:20:40.755 "claim_type": "exclusive_write", 00:20:40.755 "zoned": false, 00:20:40.755 "supported_io_types": { 00:20:40.755 "read": true, 00:20:40.755 "write": true, 00:20:40.755 "unmap": true, 00:20:40.755 "flush": true, 00:20:40.755 "reset": true, 00:20:40.755 "nvme_admin": false, 00:20:40.755 "nvme_io": false, 00:20:40.755 "nvme_io_md": false, 00:20:40.755 "write_zeroes": true, 00:20:40.755 "zcopy": true, 00:20:40.755 "get_zone_info": false, 00:20:40.755 "zone_management": false, 00:20:40.755 "zone_append": false, 00:20:40.755 "compare": false, 00:20:40.755 "compare_and_write": false, 00:20:40.755 "abort": true, 00:20:40.755 "seek_hole": false, 00:20:40.755 "seek_data": false, 00:20:40.755 "copy": true, 00:20:40.755 "nvme_iov_md": false 00:20:40.755 }, 00:20:40.755 "memory_domains": [ 00:20:40.755 { 00:20:40.755 "dma_device_id": "system", 00:20:40.755 "dma_device_type": 1 00:20:40.755 }, 00:20:40.755 { 00:20:40.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.755 "dma_device_type": 2 00:20:40.755 } 00:20:40.755 ], 00:20:40.755 "driver_specific": {} 00:20:40.755 } 00:20:40.755 ] 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:20:40.755 "name": "Existed_Raid", 00:20:40.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.755 "strip_size_kb": 64, 00:20:40.755 "state": "configuring", 00:20:40.755 "raid_level": "raid5f", 00:20:40.755 "superblock": false, 00:20:40.755 "num_base_bdevs": 3, 00:20:40.755 "num_base_bdevs_discovered": 2, 00:20:40.755 "num_base_bdevs_operational": 3, 00:20:40.755 "base_bdevs_list": [ 00:20:40.755 { 00:20:40.755 "name": "BaseBdev1", 00:20:40.755 "uuid": "58ace0ea-0359-4b30-acdc-0dec768df4a4", 00:20:40.755 "is_configured": true, 00:20:40.755 "data_offset": 0, 00:20:40.755 "data_size": 65536 00:20:40.755 }, 00:20:40.755 { 00:20:40.755 "name": "BaseBdev2", 00:20:40.755 "uuid": "0c03cd5d-cdfb-4346-8bdc-922966065efb", 00:20:40.755 "is_configured": true, 00:20:40.755 "data_offset": 0, 00:20:40.755 "data_size": 65536 00:20:40.755 }, 00:20:40.755 { 00:20:40.755 "name": "BaseBdev3", 00:20:40.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.755 "is_configured": false, 00:20:40.755 "data_offset": 0, 00:20:40.755 "data_size": 0 00:20:40.755 } 00:20:40.755 ] 00:20:40.755 }' 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.755 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.322 [2024-11-06 09:13:17.649371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.322 [2024-11-06 09:13:17.649448] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:41.322 [2024-11-06 09:13:17.649469] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:41.322 [2024-11-06 09:13:17.649878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:41.322 [2024-11-06 09:13:17.655200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:41.322 [2024-11-06 09:13:17.655363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:41.322 [2024-11-06 09:13:17.655736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.322 BaseBdev3 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.322 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.322 [ 00:20:41.322 { 00:20:41.322 "name": "BaseBdev3", 00:20:41.322 "aliases": [ 00:20:41.322 "90b470e8-bd11-4c0e-9f23-71897328072e" 00:20:41.322 ], 00:20:41.322 "product_name": "Malloc disk", 00:20:41.322 "block_size": 512, 00:20:41.322 "num_blocks": 65536, 00:20:41.322 "uuid": "90b470e8-bd11-4c0e-9f23-71897328072e", 00:20:41.322 "assigned_rate_limits": { 00:20:41.322 "rw_ios_per_sec": 0, 00:20:41.322 "rw_mbytes_per_sec": 0, 00:20:41.322 "r_mbytes_per_sec": 0, 00:20:41.322 "w_mbytes_per_sec": 0 00:20:41.322 }, 00:20:41.322 "claimed": true, 00:20:41.322 "claim_type": "exclusive_write", 00:20:41.322 "zoned": false, 00:20:41.322 "supported_io_types": { 00:20:41.322 "read": true, 00:20:41.322 "write": true, 00:20:41.322 "unmap": true, 00:20:41.322 "flush": true, 00:20:41.322 "reset": true, 00:20:41.322 "nvme_admin": false, 00:20:41.322 "nvme_io": false, 00:20:41.322 "nvme_io_md": false, 00:20:41.322 "write_zeroes": true, 00:20:41.322 "zcopy": true, 00:20:41.322 "get_zone_info": false, 00:20:41.322 "zone_management": false, 00:20:41.322 "zone_append": false, 00:20:41.322 "compare": false, 00:20:41.322 "compare_and_write": false, 00:20:41.322 "abort": true, 00:20:41.322 "seek_hole": false, 00:20:41.322 "seek_data": false, 00:20:41.322 "copy": true, 00:20:41.322 "nvme_iov_md": false 00:20:41.322 }, 00:20:41.322 "memory_domains": [ 00:20:41.322 { 00:20:41.322 "dma_device_id": "system", 00:20:41.322 "dma_device_type": 1 00:20:41.322 }, 00:20:41.322 { 00:20:41.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.322 "dma_device_type": 2 00:20:41.323 } 00:20:41.323 ], 00:20:41.323 "driver_specific": {} 00:20:41.323 } 00:20:41.323 ] 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.323 09:13:17 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.323 "name": "Existed_Raid", 00:20:41.323 "uuid": "e2dca475-06bd-4618-95b7-d5a67d5aa969", 00:20:41.323 "strip_size_kb": 64, 00:20:41.323 "state": "online", 00:20:41.323 "raid_level": "raid5f", 00:20:41.323 "superblock": false, 00:20:41.323 "num_base_bdevs": 3, 00:20:41.323 "num_base_bdevs_discovered": 3, 00:20:41.323 "num_base_bdevs_operational": 3, 00:20:41.323 "base_bdevs_list": [ 00:20:41.323 { 00:20:41.323 "name": "BaseBdev1", 00:20:41.323 "uuid": "58ace0ea-0359-4b30-acdc-0dec768df4a4", 00:20:41.323 "is_configured": true, 00:20:41.323 "data_offset": 0, 00:20:41.323 "data_size": 65536 00:20:41.323 }, 00:20:41.323 { 00:20:41.323 "name": "BaseBdev2", 00:20:41.323 "uuid": "0c03cd5d-cdfb-4346-8bdc-922966065efb", 00:20:41.323 "is_configured": true, 00:20:41.323 "data_offset": 0, 00:20:41.323 "data_size": 65536 00:20:41.323 }, 00:20:41.323 { 00:20:41.323 "name": "BaseBdev3", 00:20:41.323 "uuid": "90b470e8-bd11-4c0e-9f23-71897328072e", 00:20:41.323 "is_configured": true, 00:20:41.323 "data_offset": 0, 00:20:41.323 "data_size": 65536 00:20:41.323 } 00:20:41.323 ] 00:20:41.323 }' 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.323 09:13:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:41.889 09:13:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.889 [2024-11-06 09:13:18.237702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.889 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:41.889 "name": "Existed_Raid", 00:20:41.889 "aliases": [ 00:20:41.889 "e2dca475-06bd-4618-95b7-d5a67d5aa969" 00:20:41.889 ], 00:20:41.889 "product_name": "Raid Volume", 00:20:41.889 "block_size": 512, 00:20:41.889 "num_blocks": 131072, 00:20:41.889 "uuid": "e2dca475-06bd-4618-95b7-d5a67d5aa969", 00:20:41.889 "assigned_rate_limits": { 00:20:41.889 "rw_ios_per_sec": 0, 00:20:41.889 "rw_mbytes_per_sec": 0, 00:20:41.889 "r_mbytes_per_sec": 0, 00:20:41.889 "w_mbytes_per_sec": 0 00:20:41.889 }, 00:20:41.889 "claimed": false, 00:20:41.889 "zoned": false, 00:20:41.889 "supported_io_types": { 00:20:41.889 "read": true, 00:20:41.889 "write": true, 00:20:41.889 "unmap": false, 00:20:41.889 "flush": false, 00:20:41.889 "reset": true, 00:20:41.889 "nvme_admin": false, 00:20:41.889 "nvme_io": false, 00:20:41.889 "nvme_io_md": false, 00:20:41.889 "write_zeroes": true, 00:20:41.889 "zcopy": false, 00:20:41.889 "get_zone_info": false, 00:20:41.889 "zone_management": false, 00:20:41.889 "zone_append": false, 
00:20:41.889 "compare": false, 00:20:41.889 "compare_and_write": false, 00:20:41.889 "abort": false, 00:20:41.889 "seek_hole": false, 00:20:41.889 "seek_data": false, 00:20:41.889 "copy": false, 00:20:41.889 "nvme_iov_md": false 00:20:41.889 }, 00:20:41.889 "driver_specific": { 00:20:41.889 "raid": { 00:20:41.889 "uuid": "e2dca475-06bd-4618-95b7-d5a67d5aa969", 00:20:41.889 "strip_size_kb": 64, 00:20:41.889 "state": "online", 00:20:41.889 "raid_level": "raid5f", 00:20:41.889 "superblock": false, 00:20:41.889 "num_base_bdevs": 3, 00:20:41.889 "num_base_bdevs_discovered": 3, 00:20:41.889 "num_base_bdevs_operational": 3, 00:20:41.889 "base_bdevs_list": [ 00:20:41.889 { 00:20:41.889 "name": "BaseBdev1", 00:20:41.889 "uuid": "58ace0ea-0359-4b30-acdc-0dec768df4a4", 00:20:41.889 "is_configured": true, 00:20:41.889 "data_offset": 0, 00:20:41.889 "data_size": 65536 00:20:41.889 }, 00:20:41.889 { 00:20:41.889 "name": "BaseBdev2", 00:20:41.889 "uuid": "0c03cd5d-cdfb-4346-8bdc-922966065efb", 00:20:41.889 "is_configured": true, 00:20:41.889 "data_offset": 0, 00:20:41.889 "data_size": 65536 00:20:41.889 }, 00:20:41.889 { 00:20:41.889 "name": "BaseBdev3", 00:20:41.889 "uuid": "90b470e8-bd11-4c0e-9f23-71897328072e", 00:20:41.889 "is_configured": true, 00:20:41.889 "data_offset": 0, 00:20:41.889 "data_size": 65536 00:20:41.890 } 00:20:41.890 ] 00:20:41.890 } 00:20:41.890 } 00:20:41.890 }' 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:41.890 BaseBdev2 00:20:41.890 BaseBdev3' 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.890 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.148 [2024-11-06 09:13:18.553574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:42.148 
09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.148 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.406 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:42.406 "name": "Existed_Raid", 00:20:42.406 "uuid": "e2dca475-06bd-4618-95b7-d5a67d5aa969", 00:20:42.406 "strip_size_kb": 64, 00:20:42.406 "state": 
"online", 00:20:42.406 "raid_level": "raid5f", 00:20:42.406 "superblock": false, 00:20:42.406 "num_base_bdevs": 3, 00:20:42.406 "num_base_bdevs_discovered": 2, 00:20:42.406 "num_base_bdevs_operational": 2, 00:20:42.406 "base_bdevs_list": [ 00:20:42.406 { 00:20:42.406 "name": null, 00:20:42.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.406 "is_configured": false, 00:20:42.406 "data_offset": 0, 00:20:42.406 "data_size": 65536 00:20:42.406 }, 00:20:42.406 { 00:20:42.406 "name": "BaseBdev2", 00:20:42.406 "uuid": "0c03cd5d-cdfb-4346-8bdc-922966065efb", 00:20:42.406 "is_configured": true, 00:20:42.406 "data_offset": 0, 00:20:42.406 "data_size": 65536 00:20:42.406 }, 00:20:42.406 { 00:20:42.406 "name": "BaseBdev3", 00:20:42.406 "uuid": "90b470e8-bd11-4c0e-9f23-71897328072e", 00:20:42.406 "is_configured": true, 00:20:42.406 "data_offset": 0, 00:20:42.406 "data_size": 65536 00:20:42.406 } 00:20:42.406 ] 00:20:42.406 }' 00:20:42.406 09:13:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:42.406 09:13:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.664 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:42.664 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:42.664 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.664 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.664 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.664 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:42.664 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.923 [2024-11-06 09:13:19.214128] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:42.923 [2024-11-06 09:13:19.214258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.923 [2024-11-06 09:13:19.301165] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.923 [2024-11-06 09:13:19.361270] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:42.923 [2024-11-06 09:13:19.361336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:42.923 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.182 BaseBdev2 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:43.182 [ 00:20:43.182 { 00:20:43.182 "name": "BaseBdev2", 00:20:43.182 "aliases": [ 00:20:43.182 "e61e72ab-8ca1-49b7-89d5-8d57562f25d6" 00:20:43.182 ], 00:20:43.182 "product_name": "Malloc disk", 00:20:43.182 "block_size": 512, 00:20:43.182 "num_blocks": 65536, 00:20:43.182 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:43.182 "assigned_rate_limits": { 00:20:43.182 "rw_ios_per_sec": 0, 00:20:43.182 "rw_mbytes_per_sec": 0, 00:20:43.182 "r_mbytes_per_sec": 0, 00:20:43.182 "w_mbytes_per_sec": 0 00:20:43.182 }, 00:20:43.182 "claimed": false, 00:20:43.182 "zoned": false, 00:20:43.182 "supported_io_types": { 00:20:43.182 "read": true, 00:20:43.182 "write": true, 00:20:43.182 "unmap": true, 00:20:43.182 "flush": true, 00:20:43.182 "reset": true, 00:20:43.182 "nvme_admin": false, 00:20:43.182 "nvme_io": false, 00:20:43.182 "nvme_io_md": false, 00:20:43.182 "write_zeroes": true, 00:20:43.182 "zcopy": true, 00:20:43.182 "get_zone_info": false, 00:20:43.182 "zone_management": false, 00:20:43.182 "zone_append": false, 00:20:43.182 "compare": false, 00:20:43.182 "compare_and_write": false, 00:20:43.182 "abort": true, 00:20:43.182 "seek_hole": false, 00:20:43.182 "seek_data": false, 00:20:43.182 "copy": true, 00:20:43.182 "nvme_iov_md": false 00:20:43.182 }, 00:20:43.182 "memory_domains": [ 00:20:43.182 { 00:20:43.182 "dma_device_id": "system", 00:20:43.182 "dma_device_type": 1 00:20:43.182 }, 00:20:43.182 { 00:20:43.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.182 "dma_device_type": 2 00:20:43.182 } 00:20:43.182 ], 00:20:43.182 "driver_specific": {} 00:20:43.182 } 00:20:43.182 ] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.182 BaseBdev3 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.182 09:13:19 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:43.182 [ 00:20:43.182 { 00:20:43.182 "name": "BaseBdev3", 00:20:43.182 "aliases": [ 00:20:43.182 "e844fb3e-2a13-4389-81e0-3f0efd3b2779" 00:20:43.182 ], 00:20:43.182 "product_name": "Malloc disk", 00:20:43.182 "block_size": 512, 00:20:43.182 "num_blocks": 65536, 00:20:43.182 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:43.182 "assigned_rate_limits": { 00:20:43.182 "rw_ios_per_sec": 0, 00:20:43.182 "rw_mbytes_per_sec": 0, 00:20:43.182 "r_mbytes_per_sec": 0, 00:20:43.182 "w_mbytes_per_sec": 0 00:20:43.182 }, 00:20:43.182 "claimed": false, 00:20:43.182 "zoned": false, 00:20:43.182 "supported_io_types": { 00:20:43.182 "read": true, 00:20:43.182 "write": true, 00:20:43.182 "unmap": true, 00:20:43.182 "flush": true, 00:20:43.182 "reset": true, 00:20:43.182 "nvme_admin": false, 00:20:43.182 "nvme_io": false, 00:20:43.182 "nvme_io_md": false, 00:20:43.182 "write_zeroes": true, 00:20:43.182 "zcopy": true, 00:20:43.182 "get_zone_info": false, 00:20:43.182 "zone_management": false, 00:20:43.182 "zone_append": false, 00:20:43.182 "compare": false, 00:20:43.182 "compare_and_write": false, 00:20:43.182 "abort": true, 00:20:43.182 "seek_hole": false, 00:20:43.182 "seek_data": false, 00:20:43.182 "copy": true, 00:20:43.182 "nvme_iov_md": false 00:20:43.182 }, 00:20:43.182 "memory_domains": [ 00:20:43.182 { 00:20:43.182 "dma_device_id": "system", 00:20:43.182 "dma_device_type": 1 00:20:43.182 }, 00:20:43.182 { 00:20:43.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.182 "dma_device_type": 2 00:20:43.182 } 00:20:43.182 ], 00:20:43.183 "driver_specific": {} 00:20:43.183 } 00:20:43.183 ] 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:43.183 09:13:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.183 [2024-11-06 09:13:19.683195] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.183 [2024-11-06 09:13:19.683269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.183 [2024-11-06 09:13:19.683324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.183 [2024-11-06 09:13:19.685916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.183 09:13:19 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.183 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.450 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.450 "name": "Existed_Raid", 00:20:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.450 "strip_size_kb": 64, 00:20:43.450 "state": "configuring", 00:20:43.450 "raid_level": "raid5f", 00:20:43.450 "superblock": false, 00:20:43.450 "num_base_bdevs": 3, 00:20:43.450 "num_base_bdevs_discovered": 2, 00:20:43.450 "num_base_bdevs_operational": 3, 00:20:43.450 "base_bdevs_list": [ 00:20:43.450 { 00:20:43.450 "name": "BaseBdev1", 00:20:43.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.450 "is_configured": false, 00:20:43.450 "data_offset": 0, 00:20:43.450 "data_size": 0 00:20:43.450 }, 00:20:43.450 { 00:20:43.450 "name": "BaseBdev2", 00:20:43.450 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:43.450 "is_configured": true, 00:20:43.450 "data_offset": 0, 00:20:43.450 "data_size": 65536 00:20:43.450 }, 00:20:43.450 { 00:20:43.450 "name": "BaseBdev3", 00:20:43.450 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:43.450 "is_configured": true, 
00:20:43.450 "data_offset": 0, 00:20:43.450 "data_size": 65536 00:20:43.450 } 00:20:43.450 ] 00:20:43.450 }' 00:20:43.450 09:13:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.450 09:13:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.708 [2024-11-06 09:13:20.203319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.708 09:13:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.708 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.967 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.967 "name": "Existed_Raid", 00:20:43.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.967 "strip_size_kb": 64, 00:20:43.967 "state": "configuring", 00:20:43.967 "raid_level": "raid5f", 00:20:43.967 "superblock": false, 00:20:43.967 "num_base_bdevs": 3, 00:20:43.967 "num_base_bdevs_discovered": 1, 00:20:43.967 "num_base_bdevs_operational": 3, 00:20:43.967 "base_bdevs_list": [ 00:20:43.967 { 00:20:43.967 "name": "BaseBdev1", 00:20:43.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.967 "is_configured": false, 00:20:43.967 "data_offset": 0, 00:20:43.967 "data_size": 0 00:20:43.967 }, 00:20:43.967 { 00:20:43.967 "name": null, 00:20:43.967 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:43.967 "is_configured": false, 00:20:43.967 "data_offset": 0, 00:20:43.967 "data_size": 65536 00:20:43.967 }, 00:20:43.967 { 00:20:43.967 "name": "BaseBdev3", 00:20:43.967 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:43.967 "is_configured": true, 00:20:43.967 "data_offset": 0, 00:20:43.967 "data_size": 65536 00:20:43.967 } 00:20:43.967 ] 00:20:43.967 }' 00:20:43.967 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.967 09:13:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.225 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.225 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:44.225 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.225 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.225 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 [2024-11-06 09:13:20.805952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.484 BaseBdev1 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:44.484 09:13:20 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 [ 00:20:44.484 { 00:20:44.484 "name": "BaseBdev1", 00:20:44.484 "aliases": [ 00:20:44.484 "7432307a-7314-44a6-b81b-e0f3aa2823e4" 00:20:44.484 ], 00:20:44.484 "product_name": "Malloc disk", 00:20:44.484 "block_size": 512, 00:20:44.484 "num_blocks": 65536, 00:20:44.484 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:44.484 "assigned_rate_limits": { 00:20:44.484 "rw_ios_per_sec": 0, 00:20:44.484 "rw_mbytes_per_sec": 0, 00:20:44.484 "r_mbytes_per_sec": 0, 00:20:44.484 "w_mbytes_per_sec": 0 00:20:44.484 }, 00:20:44.484 "claimed": true, 00:20:44.484 "claim_type": "exclusive_write", 00:20:44.484 "zoned": false, 00:20:44.484 "supported_io_types": { 00:20:44.484 "read": true, 00:20:44.484 "write": true, 00:20:44.484 "unmap": true, 00:20:44.484 "flush": true, 00:20:44.484 "reset": true, 00:20:44.484 "nvme_admin": false, 00:20:44.484 "nvme_io": false, 00:20:44.484 "nvme_io_md": false, 00:20:44.484 "write_zeroes": true, 00:20:44.484 "zcopy": true, 00:20:44.484 "get_zone_info": false, 00:20:44.484 "zone_management": false, 00:20:44.484 "zone_append": false, 00:20:44.484 
"compare": false, 00:20:44.484 "compare_and_write": false, 00:20:44.484 "abort": true, 00:20:44.484 "seek_hole": false, 00:20:44.484 "seek_data": false, 00:20:44.484 "copy": true, 00:20:44.484 "nvme_iov_md": false 00:20:44.484 }, 00:20:44.484 "memory_domains": [ 00:20:44.484 { 00:20:44.484 "dma_device_id": "system", 00:20:44.484 "dma_device_type": 1 00:20:44.484 }, 00:20:44.484 { 00:20:44.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.484 "dma_device_type": 2 00:20:44.484 } 00:20:44.484 ], 00:20:44.484 "driver_specific": {} 00:20:44.484 } 00:20:44.484 ] 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.484 09:13:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.484 "name": "Existed_Raid", 00:20:44.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.484 "strip_size_kb": 64, 00:20:44.484 "state": "configuring", 00:20:44.484 "raid_level": "raid5f", 00:20:44.484 "superblock": false, 00:20:44.484 "num_base_bdevs": 3, 00:20:44.484 "num_base_bdevs_discovered": 2, 00:20:44.484 "num_base_bdevs_operational": 3, 00:20:44.484 "base_bdevs_list": [ 00:20:44.484 { 00:20:44.484 "name": "BaseBdev1", 00:20:44.484 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:44.484 "is_configured": true, 00:20:44.484 "data_offset": 0, 00:20:44.484 "data_size": 65536 00:20:44.484 }, 00:20:44.484 { 00:20:44.484 "name": null, 00:20:44.484 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:44.484 "is_configured": false, 00:20:44.484 "data_offset": 0, 00:20:44.484 "data_size": 65536 00:20:44.484 }, 00:20:44.484 { 00:20:44.484 "name": "BaseBdev3", 00:20:44.484 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:44.484 "is_configured": true, 00:20:44.484 "data_offset": 0, 00:20:44.484 "data_size": 65536 00:20:44.484 } 00:20:44.484 ] 00:20:44.484 }' 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.484 09:13:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.052 09:13:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.052 [2024-11-06 09:13:21.410168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.052 09:13:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.052 "name": "Existed_Raid", 00:20:45.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.052 "strip_size_kb": 64, 00:20:45.052 "state": "configuring", 00:20:45.052 "raid_level": "raid5f", 00:20:45.052 "superblock": false, 00:20:45.052 "num_base_bdevs": 3, 00:20:45.052 "num_base_bdevs_discovered": 1, 00:20:45.052 "num_base_bdevs_operational": 3, 00:20:45.052 "base_bdevs_list": [ 00:20:45.052 { 00:20:45.052 "name": "BaseBdev1", 00:20:45.052 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:45.052 "is_configured": true, 00:20:45.052 "data_offset": 0, 00:20:45.052 "data_size": 65536 00:20:45.052 }, 00:20:45.052 { 00:20:45.052 "name": null, 00:20:45.052 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:45.052 "is_configured": false, 00:20:45.052 "data_offset": 0, 00:20:45.052 "data_size": 65536 00:20:45.052 }, 00:20:45.052 { 00:20:45.052 "name": null, 
00:20:45.052 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:45.052 "is_configured": false, 00:20:45.052 "data_offset": 0, 00:20:45.052 "data_size": 65536 00:20:45.052 } 00:20:45.052 ] 00:20:45.052 }' 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.052 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.649 [2024-11-06 09:13:21.978404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.649 09:13:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.649 09:13:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.649 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.649 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.649 "name": "Existed_Raid", 00:20:45.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.649 "strip_size_kb": 64, 00:20:45.649 "state": "configuring", 00:20:45.649 "raid_level": "raid5f", 00:20:45.649 "superblock": false, 00:20:45.649 "num_base_bdevs": 3, 00:20:45.649 "num_base_bdevs_discovered": 2, 00:20:45.649 "num_base_bdevs_operational": 3, 00:20:45.649 "base_bdevs_list": [ 00:20:45.649 { 
00:20:45.649 "name": "BaseBdev1", 00:20:45.649 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:45.649 "is_configured": true, 00:20:45.649 "data_offset": 0, 00:20:45.649 "data_size": 65536 00:20:45.649 }, 00:20:45.649 { 00:20:45.649 "name": null, 00:20:45.649 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:45.649 "is_configured": false, 00:20:45.649 "data_offset": 0, 00:20:45.649 "data_size": 65536 00:20:45.649 }, 00:20:45.649 { 00:20:45.649 "name": "BaseBdev3", 00:20:45.649 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:45.649 "is_configured": true, 00:20:45.649 "data_offset": 0, 00:20:45.649 "data_size": 65536 00:20:45.649 } 00:20:45.649 ] 00:20:45.649 }' 00:20:45.649 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.649 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.221 [2024-11-06 09:13:22.554550] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.221 09:13:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.221 "name": "Existed_Raid", 00:20:46.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.221 "strip_size_kb": 64, 00:20:46.221 "state": "configuring", 00:20:46.221 "raid_level": "raid5f", 00:20:46.221 "superblock": false, 00:20:46.221 "num_base_bdevs": 3, 00:20:46.221 "num_base_bdevs_discovered": 1, 00:20:46.221 "num_base_bdevs_operational": 3, 00:20:46.221 "base_bdevs_list": [ 00:20:46.221 { 00:20:46.221 "name": null, 00:20:46.221 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:46.221 "is_configured": false, 00:20:46.221 "data_offset": 0, 00:20:46.221 "data_size": 65536 00:20:46.221 }, 00:20:46.221 { 00:20:46.221 "name": null, 00:20:46.221 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:46.221 "is_configured": false, 00:20:46.221 "data_offset": 0, 00:20:46.221 "data_size": 65536 00:20:46.221 }, 00:20:46.221 { 00:20:46.221 "name": "BaseBdev3", 00:20:46.221 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:46.222 "is_configured": true, 00:20:46.222 "data_offset": 0, 00:20:46.222 "data_size": 65536 00:20:46.222 } 00:20:46.222 ] 00:20:46.222 }' 00:20:46.222 09:13:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.222 09:13:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.789 [2024-11-06 09:13:23.200543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.789 09:13:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:46.789 "name": "Existed_Raid", 00:20:46.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.789 "strip_size_kb": 64, 00:20:46.789 "state": "configuring", 00:20:46.789 "raid_level": "raid5f", 00:20:46.789 "superblock": false, 00:20:46.789 "num_base_bdevs": 3, 00:20:46.789 "num_base_bdevs_discovered": 2, 00:20:46.789 "num_base_bdevs_operational": 3, 00:20:46.789 "base_bdevs_list": [ 00:20:46.789 { 00:20:46.789 "name": null, 00:20:46.789 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:46.789 "is_configured": false, 00:20:46.789 "data_offset": 0, 00:20:46.789 "data_size": 65536 00:20:46.789 }, 00:20:46.789 { 00:20:46.789 "name": "BaseBdev2", 00:20:46.789 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:46.789 "is_configured": true, 00:20:46.789 "data_offset": 0, 00:20:46.789 "data_size": 65536 00:20:46.789 }, 00:20:46.789 { 00:20:46.789 "name": "BaseBdev3", 00:20:46.789 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:46.789 "is_configured": true, 00:20:46.789 "data_offset": 0, 00:20:46.789 "data_size": 65536 00:20:46.789 } 00:20:46.789 ] 00:20:46.789 }' 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:46.789 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.357 09:13:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7432307a-7314-44a6-b81b-e0f3aa2823e4 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.357 [2024-11-06 09:13:23.827430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:47.357 [2024-11-06 09:13:23.827673] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:47.357 [2024-11-06 09:13:23.827735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:20:47.357 [2024-11-06 09:13:23.828184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:20:47.357 [2024-11-06 09:13:23.833252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:47.357 [2024-11-06 09:13:23.833401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:47.357 [2024-11-06 09:13:23.833909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.357 NewBaseBdev 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.357 09:13:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.357 [ 00:20:47.357 { 00:20:47.357 "name": "NewBaseBdev", 00:20:47.357 "aliases": [ 00:20:47.357 "7432307a-7314-44a6-b81b-e0f3aa2823e4" 00:20:47.357 ], 00:20:47.357 "product_name": "Malloc disk", 00:20:47.357 "block_size": 512, 00:20:47.357 "num_blocks": 65536, 00:20:47.357 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:47.357 "assigned_rate_limits": { 00:20:47.357 "rw_ios_per_sec": 0, 00:20:47.357 "rw_mbytes_per_sec": 0, 00:20:47.357 "r_mbytes_per_sec": 0, 00:20:47.357 "w_mbytes_per_sec": 0 00:20:47.357 }, 00:20:47.357 "claimed": true, 00:20:47.357 "claim_type": "exclusive_write", 00:20:47.357 "zoned": false, 00:20:47.357 "supported_io_types": { 00:20:47.357 "read": true, 00:20:47.357 "write": true, 00:20:47.357 "unmap": true, 00:20:47.357 "flush": true, 00:20:47.357 "reset": true, 00:20:47.357 "nvme_admin": false, 00:20:47.357 "nvme_io": false, 00:20:47.357 "nvme_io_md": false, 00:20:47.357 "write_zeroes": true, 00:20:47.357 "zcopy": true, 00:20:47.357 "get_zone_info": false, 00:20:47.357 "zone_management": false, 00:20:47.357 "zone_append": false, 00:20:47.357 "compare": false, 00:20:47.357 "compare_and_write": false, 00:20:47.357 "abort": true, 00:20:47.357 "seek_hole": false, 00:20:47.357 "seek_data": false, 00:20:47.357 "copy": true, 00:20:47.357 "nvme_iov_md": false 00:20:47.357 }, 00:20:47.357 "memory_domains": [ 00:20:47.357 { 00:20:47.357 "dma_device_id": "system", 00:20:47.357 "dma_device_type": 1 00:20:47.357 }, 00:20:47.357 { 00:20:47.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.357 "dma_device_type": 2 00:20:47.357 } 00:20:47.357 ], 00:20:47.357 "driver_specific": {} 00:20:47.357 } 00:20:47.357 ] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:20:47.357 09:13:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.357 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.358 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:47.358 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.358 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.358 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.358 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.616 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.616 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.616 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.616 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.616 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.616 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.617 "name": "Existed_Raid", 00:20:47.617 "uuid": "f206f83c-fc91-43e8-816c-ffde9e33645c", 00:20:47.617 "strip_size_kb": 64, 00:20:47.617 "state": "online", 
00:20:47.617 "raid_level": "raid5f", 00:20:47.617 "superblock": false, 00:20:47.617 "num_base_bdevs": 3, 00:20:47.617 "num_base_bdevs_discovered": 3, 00:20:47.617 "num_base_bdevs_operational": 3, 00:20:47.617 "base_bdevs_list": [ 00:20:47.617 { 00:20:47.617 "name": "NewBaseBdev", 00:20:47.617 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:47.617 "is_configured": true, 00:20:47.617 "data_offset": 0, 00:20:47.617 "data_size": 65536 00:20:47.617 }, 00:20:47.617 { 00:20:47.617 "name": "BaseBdev2", 00:20:47.617 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:47.617 "is_configured": true, 00:20:47.617 "data_offset": 0, 00:20:47.617 "data_size": 65536 00:20:47.617 }, 00:20:47.617 { 00:20:47.617 "name": "BaseBdev3", 00:20:47.617 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:47.617 "is_configured": true, 00:20:47.617 "data_offset": 0, 00:20:47.617 "data_size": 65536 00:20:47.617 } 00:20:47.617 ] 00:20:47.617 }' 00:20:47.617 09:13:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.617 09:13:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.875 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:47.875 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:47.875 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:47.875 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:47.875 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:47.875 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:48.133 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:48.133 09:13:24 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:48.133 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.133 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 [2024-11-06 09:13:24.416052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:48.134 "name": "Existed_Raid", 00:20:48.134 "aliases": [ 00:20:48.134 "f206f83c-fc91-43e8-816c-ffde9e33645c" 00:20:48.134 ], 00:20:48.134 "product_name": "Raid Volume", 00:20:48.134 "block_size": 512, 00:20:48.134 "num_blocks": 131072, 00:20:48.134 "uuid": "f206f83c-fc91-43e8-816c-ffde9e33645c", 00:20:48.134 "assigned_rate_limits": { 00:20:48.134 "rw_ios_per_sec": 0, 00:20:48.134 "rw_mbytes_per_sec": 0, 00:20:48.134 "r_mbytes_per_sec": 0, 00:20:48.134 "w_mbytes_per_sec": 0 00:20:48.134 }, 00:20:48.134 "claimed": false, 00:20:48.134 "zoned": false, 00:20:48.134 "supported_io_types": { 00:20:48.134 "read": true, 00:20:48.134 "write": true, 00:20:48.134 "unmap": false, 00:20:48.134 "flush": false, 00:20:48.134 "reset": true, 00:20:48.134 "nvme_admin": false, 00:20:48.134 "nvme_io": false, 00:20:48.134 "nvme_io_md": false, 00:20:48.134 "write_zeroes": true, 00:20:48.134 "zcopy": false, 00:20:48.134 "get_zone_info": false, 00:20:48.134 "zone_management": false, 00:20:48.134 "zone_append": false, 00:20:48.134 "compare": false, 00:20:48.134 "compare_and_write": false, 00:20:48.134 "abort": false, 00:20:48.134 "seek_hole": false, 00:20:48.134 "seek_data": false, 00:20:48.134 "copy": false, 00:20:48.134 "nvme_iov_md": false 00:20:48.134 }, 00:20:48.134 "driver_specific": { 00:20:48.134 "raid": { 00:20:48.134 "uuid": "f206f83c-fc91-43e8-816c-ffde9e33645c", 
00:20:48.134 "strip_size_kb": 64, 00:20:48.134 "state": "online", 00:20:48.134 "raid_level": "raid5f", 00:20:48.134 "superblock": false, 00:20:48.134 "num_base_bdevs": 3, 00:20:48.134 "num_base_bdevs_discovered": 3, 00:20:48.134 "num_base_bdevs_operational": 3, 00:20:48.134 "base_bdevs_list": [ 00:20:48.134 { 00:20:48.134 "name": "NewBaseBdev", 00:20:48.134 "uuid": "7432307a-7314-44a6-b81b-e0f3aa2823e4", 00:20:48.134 "is_configured": true, 00:20:48.134 "data_offset": 0, 00:20:48.134 "data_size": 65536 00:20:48.134 }, 00:20:48.134 { 00:20:48.134 "name": "BaseBdev2", 00:20:48.134 "uuid": "e61e72ab-8ca1-49b7-89d5-8d57562f25d6", 00:20:48.134 "is_configured": true, 00:20:48.134 "data_offset": 0, 00:20:48.134 "data_size": 65536 00:20:48.134 }, 00:20:48.134 { 00:20:48.134 "name": "BaseBdev3", 00:20:48.134 "uuid": "e844fb3e-2a13-4389-81e0-3f0efd3b2779", 00:20:48.134 "is_configured": true, 00:20:48.134 "data_offset": 0, 00:20:48.134 "data_size": 65536 00:20:48.134 } 00:20:48.134 ] 00:20:48.134 } 00:20:48.134 } 00:20:48.134 }' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:48.134 BaseBdev2 00:20:48.134 BaseBdev3' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.134 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:48.393 09:13:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.393 [2024-11-06 09:13:24.727882] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:48.393 [2024-11-06 09:13:24.727917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:48.393 [2024-11-06 09:13:24.728019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:48.393 [2024-11-06 09:13:24.728368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:48.393 [2024-11-06 09:13:24.728398] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80135 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 80135 ']' 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 80135 
00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80135 00:20:48.393 killing process with pid 80135 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80135' 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 80135 00:20:48.393 [2024-11-06 09:13:24.766910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:48.393 09:13:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 80135 00:20:48.654 [2024-11-06 09:13:25.037305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:49.589 09:13:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:49.589 00:20:49.589 real 0m11.919s 00:20:49.589 user 0m19.755s 00:20:49.589 sys 0m1.624s 00:20:49.589 09:13:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:49.589 09:13:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.590 ************************************ 00:20:49.590 END TEST raid5f_state_function_test 00:20:49.590 ************************************ 00:20:49.849 09:13:26 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:20:49.849 09:13:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:20:49.849 
09:13:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:49.849 09:13:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:49.849 ************************************ 00:20:49.849 START TEST raid5f_state_function_test_sb 00:20:49.849 ************************************ 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:49.849 
09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:49.849 Process raid pid: 80768 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80768 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80768' 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80768 00:20:49.849 09:13:26 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80768 ']' 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:49.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:49.849 09:13:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:49.849 [2024-11-06 09:13:26.248016] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:20:49.849 [2024-11-06 09:13:26.248174] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.108 [2024-11-06 09:13:26.420001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.108 [2024-11-06 09:13:26.553902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.370 [2024-11-06 09:13:26.765479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.370 [2024-11-06 09:13:26.765533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:20:50.936 09:13:27 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.936 [2024-11-06 09:13:27.231523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:50.936 [2024-11-06 09:13:27.231756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:50.936 [2024-11-06 09:13:27.231786] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:50.936 [2024-11-06 09:13:27.231806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:50.936 [2024-11-06 09:13:27.231837] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:50.936 [2024-11-06 09:13:27.231855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.936 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.936 "name": "Existed_Raid", 00:20:50.936 "uuid": "fbb70bf9-5918-4bb7-a0d3-34a46acec5d3", 00:20:50.937 "strip_size_kb": 64, 00:20:50.937 "state": "configuring", 00:20:50.937 "raid_level": "raid5f", 00:20:50.937 "superblock": true, 00:20:50.937 "num_base_bdevs": 3, 00:20:50.937 "num_base_bdevs_discovered": 0, 00:20:50.937 "num_base_bdevs_operational": 3, 00:20:50.937 "base_bdevs_list": [ 00:20:50.937 { 00:20:50.937 "name": "BaseBdev1", 00:20:50.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.937 "is_configured": false, 00:20:50.937 "data_offset": 0, 00:20:50.937 "data_size": 0 00:20:50.937 }, 00:20:50.937 { 00:20:50.937 "name": "BaseBdev2", 00:20:50.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.937 "is_configured": false, 00:20:50.937 
"data_offset": 0, 00:20:50.937 "data_size": 0 00:20:50.937 }, 00:20:50.937 { 00:20:50.937 "name": "BaseBdev3", 00:20:50.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.937 "is_configured": false, 00:20:50.937 "data_offset": 0, 00:20:50.937 "data_size": 0 00:20:50.937 } 00:20:50.937 ] 00:20:50.937 }' 00:20:50.937 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.937 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.503 [2024-11-06 09:13:27.751565] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.503 [2024-11-06 09:13:27.751613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.503 [2024-11-06 09:13:27.759572] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:51.503 [2024-11-06 09:13:27.759626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:51.503 [2024-11-06 09:13:27.759643] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:51.503 [2024-11-06 09:13:27.759658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:51.503 [2024-11-06 09:13:27.759668] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:51.503 [2024-11-06 09:13:27.759683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.503 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.503 [2024-11-06 09:13:27.804804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:51.503 BaseBdev1 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.504 [ 00:20:51.504 { 00:20:51.504 "name": "BaseBdev1", 00:20:51.504 "aliases": [ 00:20:51.504 "9759217d-2622-464d-bb5a-b2b96c3090ed" 00:20:51.504 ], 00:20:51.504 "product_name": "Malloc disk", 00:20:51.504 "block_size": 512, 00:20:51.504 "num_blocks": 65536, 00:20:51.504 "uuid": "9759217d-2622-464d-bb5a-b2b96c3090ed", 00:20:51.504 "assigned_rate_limits": { 00:20:51.504 "rw_ios_per_sec": 0, 00:20:51.504 "rw_mbytes_per_sec": 0, 00:20:51.504 "r_mbytes_per_sec": 0, 00:20:51.504 "w_mbytes_per_sec": 0 00:20:51.504 }, 00:20:51.504 "claimed": true, 00:20:51.504 "claim_type": "exclusive_write", 00:20:51.504 "zoned": false, 00:20:51.504 "supported_io_types": { 00:20:51.504 "read": true, 00:20:51.504 "write": true, 00:20:51.504 "unmap": true, 00:20:51.504 "flush": true, 00:20:51.504 "reset": true, 00:20:51.504 "nvme_admin": false, 00:20:51.504 "nvme_io": false, 00:20:51.504 "nvme_io_md": false, 00:20:51.504 "write_zeroes": true, 00:20:51.504 "zcopy": true, 00:20:51.504 "get_zone_info": false, 00:20:51.504 "zone_management": false, 00:20:51.504 "zone_append": false, 00:20:51.504 "compare": false, 00:20:51.504 "compare_and_write": false, 00:20:51.504 "abort": true, 00:20:51.504 "seek_hole": false, 00:20:51.504 
"seek_data": false, 00:20:51.504 "copy": true, 00:20:51.504 "nvme_iov_md": false 00:20:51.504 }, 00:20:51.504 "memory_domains": [ 00:20:51.504 { 00:20:51.504 "dma_device_id": "system", 00:20:51.504 "dma_device_type": 1 00:20:51.504 }, 00:20:51.504 { 00:20:51.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.504 "dma_device_type": 2 00:20:51.504 } 00:20:51.504 ], 00:20:51.504 "driver_specific": {} 00:20:51.504 } 00:20:51.504 ] 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.504 "name": "Existed_Raid", 00:20:51.504 "uuid": "ce181adc-7db0-415a-b78b-409549f6c4ae", 00:20:51.504 "strip_size_kb": 64, 00:20:51.504 "state": "configuring", 00:20:51.504 "raid_level": "raid5f", 00:20:51.504 "superblock": true, 00:20:51.504 "num_base_bdevs": 3, 00:20:51.504 "num_base_bdevs_discovered": 1, 00:20:51.504 "num_base_bdevs_operational": 3, 00:20:51.504 "base_bdevs_list": [ 00:20:51.504 { 00:20:51.504 "name": "BaseBdev1", 00:20:51.504 "uuid": "9759217d-2622-464d-bb5a-b2b96c3090ed", 00:20:51.504 "is_configured": true, 00:20:51.504 "data_offset": 2048, 00:20:51.504 "data_size": 63488 00:20:51.504 }, 00:20:51.504 { 00:20:51.504 "name": "BaseBdev2", 00:20:51.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.504 "is_configured": false, 00:20:51.504 "data_offset": 0, 00:20:51.504 "data_size": 0 00:20:51.504 }, 00:20:51.504 { 00:20:51.504 "name": "BaseBdev3", 00:20:51.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.504 "is_configured": false, 00:20:51.504 "data_offset": 0, 00:20:51.504 "data_size": 0 00:20:51.504 } 00:20:51.504 ] 00:20:51.504 }' 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.504 09:13:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.763 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:20:51.763 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.763 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.763 [2024-11-06 09:13:28.297018] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:51.763 [2024-11-06 09:13:28.297085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.022 [2024-11-06 09:13:28.305063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:52.022 [2024-11-06 09:13:28.307577] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:52.022 [2024-11-06 09:13:28.307663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:52.022 [2024-11-06 09:13:28.307680] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:52.022 [2024-11-06 09:13:28.307697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.022 "name": 
"Existed_Raid", 00:20:52.022 "uuid": "0661ade5-3a6b-4e4d-b327-78efecb4cd2c", 00:20:52.022 "strip_size_kb": 64, 00:20:52.022 "state": "configuring", 00:20:52.022 "raid_level": "raid5f", 00:20:52.022 "superblock": true, 00:20:52.022 "num_base_bdevs": 3, 00:20:52.022 "num_base_bdevs_discovered": 1, 00:20:52.022 "num_base_bdevs_operational": 3, 00:20:52.022 "base_bdevs_list": [ 00:20:52.022 { 00:20:52.022 "name": "BaseBdev1", 00:20:52.022 "uuid": "9759217d-2622-464d-bb5a-b2b96c3090ed", 00:20:52.022 "is_configured": true, 00:20:52.022 "data_offset": 2048, 00:20:52.022 "data_size": 63488 00:20:52.022 }, 00:20:52.022 { 00:20:52.022 "name": "BaseBdev2", 00:20:52.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.022 "is_configured": false, 00:20:52.022 "data_offset": 0, 00:20:52.022 "data_size": 0 00:20:52.022 }, 00:20:52.022 { 00:20:52.022 "name": "BaseBdev3", 00:20:52.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.022 "is_configured": false, 00:20:52.022 "data_offset": 0, 00:20:52.022 "data_size": 0 00:20:52.022 } 00:20:52.022 ] 00:20:52.022 }' 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.022 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.588 [2024-11-06 09:13:28.864001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:52.588 BaseBdev2 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.588 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.588 [ 00:20:52.588 { 00:20:52.588 "name": "BaseBdev2", 00:20:52.588 "aliases": [ 00:20:52.588 "5a51ff09-1b85-482f-9850-49ccc0cf553e" 00:20:52.588 ], 00:20:52.588 "product_name": "Malloc disk", 00:20:52.588 "block_size": 512, 00:20:52.588 "num_blocks": 65536, 00:20:52.588 "uuid": "5a51ff09-1b85-482f-9850-49ccc0cf553e", 00:20:52.588 "assigned_rate_limits": { 00:20:52.588 "rw_ios_per_sec": 0, 00:20:52.588 "rw_mbytes_per_sec": 0, 00:20:52.588 "r_mbytes_per_sec": 0, 00:20:52.588 "w_mbytes_per_sec": 0 00:20:52.588 }, 00:20:52.588 "claimed": true, 
00:20:52.588 "claim_type": "exclusive_write", 00:20:52.588 "zoned": false, 00:20:52.588 "supported_io_types": { 00:20:52.588 "read": true, 00:20:52.589 "write": true, 00:20:52.589 "unmap": true, 00:20:52.589 "flush": true, 00:20:52.589 "reset": true, 00:20:52.589 "nvme_admin": false, 00:20:52.589 "nvme_io": false, 00:20:52.589 "nvme_io_md": false, 00:20:52.589 "write_zeroes": true, 00:20:52.589 "zcopy": true, 00:20:52.589 "get_zone_info": false, 00:20:52.589 "zone_management": false, 00:20:52.589 "zone_append": false, 00:20:52.589 "compare": false, 00:20:52.589 "compare_and_write": false, 00:20:52.589 "abort": true, 00:20:52.589 "seek_hole": false, 00:20:52.589 "seek_data": false, 00:20:52.589 "copy": true, 00:20:52.589 "nvme_iov_md": false 00:20:52.589 }, 00:20:52.589 "memory_domains": [ 00:20:52.589 { 00:20:52.589 "dma_device_id": "system", 00:20:52.589 "dma_device_type": 1 00:20:52.589 }, 00:20:52.589 { 00:20:52.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.589 "dma_device_type": 2 00:20:52.589 } 00:20:52.589 ], 00:20:52.589 "driver_specific": {} 00:20:52.589 } 00:20:52.589 ] 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:52.589 09:13:28 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.589 "name": "Existed_Raid", 00:20:52.589 "uuid": "0661ade5-3a6b-4e4d-b327-78efecb4cd2c", 00:20:52.589 "strip_size_kb": 64, 00:20:52.589 "state": "configuring", 00:20:52.589 "raid_level": "raid5f", 00:20:52.589 "superblock": true, 00:20:52.589 "num_base_bdevs": 3, 00:20:52.589 "num_base_bdevs_discovered": 2, 00:20:52.589 "num_base_bdevs_operational": 3, 00:20:52.589 "base_bdevs_list": [ 00:20:52.589 { 00:20:52.589 "name": "BaseBdev1", 00:20:52.589 "uuid": "9759217d-2622-464d-bb5a-b2b96c3090ed", 
00:20:52.589 "is_configured": true, 00:20:52.589 "data_offset": 2048, 00:20:52.589 "data_size": 63488 00:20:52.589 }, 00:20:52.589 { 00:20:52.589 "name": "BaseBdev2", 00:20:52.589 "uuid": "5a51ff09-1b85-482f-9850-49ccc0cf553e", 00:20:52.589 "is_configured": true, 00:20:52.589 "data_offset": 2048, 00:20:52.589 "data_size": 63488 00:20:52.589 }, 00:20:52.589 { 00:20:52.589 "name": "BaseBdev3", 00:20:52.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.589 "is_configured": false, 00:20:52.589 "data_offset": 0, 00:20:52.589 "data_size": 0 00:20:52.589 } 00:20:52.589 ] 00:20:52.589 }' 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.589 09:13:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.155 [2024-11-06 09:13:29.459085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:53.155 [2024-11-06 09:13:29.459403] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:53.155 [2024-11-06 09:13:29.459436] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:53.155 BaseBdev3 00:20:53.155 [2024-11-06 09:13:29.459764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.155 [2024-11-06 09:13:29.465056] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:53.155 [2024-11-06 09:13:29.465087] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:53.155 [2024-11-06 09:13:29.465410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.155 [ 00:20:53.155 { 00:20:53.155 "name": "BaseBdev3", 00:20:53.155 "aliases": [ 00:20:53.155 "47c6ee54-699e-4b4c-aa13-d703a97b222e" 00:20:53.155 ], 00:20:53.155 "product_name": "Malloc disk", 00:20:53.155 "block_size": 512, 00:20:53.155 
"num_blocks": 65536, 00:20:53.155 "uuid": "47c6ee54-699e-4b4c-aa13-d703a97b222e", 00:20:53.155 "assigned_rate_limits": { 00:20:53.155 "rw_ios_per_sec": 0, 00:20:53.155 "rw_mbytes_per_sec": 0, 00:20:53.155 "r_mbytes_per_sec": 0, 00:20:53.155 "w_mbytes_per_sec": 0 00:20:53.155 }, 00:20:53.155 "claimed": true, 00:20:53.155 "claim_type": "exclusive_write", 00:20:53.155 "zoned": false, 00:20:53.155 "supported_io_types": { 00:20:53.155 "read": true, 00:20:53.155 "write": true, 00:20:53.155 "unmap": true, 00:20:53.155 "flush": true, 00:20:53.155 "reset": true, 00:20:53.155 "nvme_admin": false, 00:20:53.155 "nvme_io": false, 00:20:53.155 "nvme_io_md": false, 00:20:53.155 "write_zeroes": true, 00:20:53.155 "zcopy": true, 00:20:53.155 "get_zone_info": false, 00:20:53.155 "zone_management": false, 00:20:53.155 "zone_append": false, 00:20:53.155 "compare": false, 00:20:53.155 "compare_and_write": false, 00:20:53.155 "abort": true, 00:20:53.155 "seek_hole": false, 00:20:53.155 "seek_data": false, 00:20:53.155 "copy": true, 00:20:53.155 "nvme_iov_md": false 00:20:53.155 }, 00:20:53.155 "memory_domains": [ 00:20:53.155 { 00:20:53.155 "dma_device_id": "system", 00:20:53.155 "dma_device_type": 1 00:20:53.155 }, 00:20:53.155 { 00:20:53.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.155 "dma_device_type": 2 00:20:53.155 } 00:20:53.155 ], 00:20:53.155 "driver_specific": {} 00:20:53.155 } 00:20:53.155 ] 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:53.155 "name": "Existed_Raid", 00:20:53.155 "uuid": "0661ade5-3a6b-4e4d-b327-78efecb4cd2c", 00:20:53.155 "strip_size_kb": 64, 00:20:53.155 "state": "online", 00:20:53.155 "raid_level": "raid5f", 00:20:53.155 "superblock": true, 
00:20:53.155 "num_base_bdevs": 3, 00:20:53.155 "num_base_bdevs_discovered": 3, 00:20:53.155 "num_base_bdevs_operational": 3, 00:20:53.155 "base_bdevs_list": [ 00:20:53.155 { 00:20:53.155 "name": "BaseBdev1", 00:20:53.155 "uuid": "9759217d-2622-464d-bb5a-b2b96c3090ed", 00:20:53.155 "is_configured": true, 00:20:53.155 "data_offset": 2048, 00:20:53.155 "data_size": 63488 00:20:53.155 }, 00:20:53.155 { 00:20:53.155 "name": "BaseBdev2", 00:20:53.155 "uuid": "5a51ff09-1b85-482f-9850-49ccc0cf553e", 00:20:53.155 "is_configured": true, 00:20:53.155 "data_offset": 2048, 00:20:53.155 "data_size": 63488 00:20:53.155 }, 00:20:53.155 { 00:20:53.155 "name": "BaseBdev3", 00:20:53.155 "uuid": "47c6ee54-699e-4b4c-aa13-d703a97b222e", 00:20:53.155 "is_configured": true, 00:20:53.155 "data_offset": 2048, 00:20:53.155 "data_size": 63488 00:20:53.155 } 00:20:53.155 ] 00:20:53.155 }' 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:53.155 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.487 09:13:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:53.487 [2024-11-06 09:13:30.003468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:53.487 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:53.747 "name": "Existed_Raid", 00:20:53.747 "aliases": [ 00:20:53.747 "0661ade5-3a6b-4e4d-b327-78efecb4cd2c" 00:20:53.747 ], 00:20:53.747 "product_name": "Raid Volume", 00:20:53.747 "block_size": 512, 00:20:53.747 "num_blocks": 126976, 00:20:53.747 "uuid": "0661ade5-3a6b-4e4d-b327-78efecb4cd2c", 00:20:53.747 "assigned_rate_limits": { 00:20:53.747 "rw_ios_per_sec": 0, 00:20:53.747 "rw_mbytes_per_sec": 0, 00:20:53.747 "r_mbytes_per_sec": 0, 00:20:53.747 "w_mbytes_per_sec": 0 00:20:53.747 }, 00:20:53.747 "claimed": false, 00:20:53.747 "zoned": false, 00:20:53.747 "supported_io_types": { 00:20:53.747 "read": true, 00:20:53.747 "write": true, 00:20:53.747 "unmap": false, 00:20:53.747 "flush": false, 00:20:53.747 "reset": true, 00:20:53.747 "nvme_admin": false, 00:20:53.747 "nvme_io": false, 00:20:53.747 "nvme_io_md": false, 00:20:53.747 "write_zeroes": true, 00:20:53.747 "zcopy": false, 00:20:53.747 "get_zone_info": false, 00:20:53.747 "zone_management": false, 00:20:53.747 "zone_append": false, 00:20:53.747 "compare": false, 00:20:53.747 "compare_and_write": false, 00:20:53.747 "abort": false, 00:20:53.747 "seek_hole": false, 00:20:53.747 "seek_data": false, 00:20:53.747 "copy": false, 00:20:53.747 "nvme_iov_md": false 00:20:53.747 }, 00:20:53.747 "driver_specific": { 00:20:53.747 "raid": { 00:20:53.747 "uuid": "0661ade5-3a6b-4e4d-b327-78efecb4cd2c", 00:20:53.747 
"strip_size_kb": 64, 00:20:53.747 "state": "online", 00:20:53.747 "raid_level": "raid5f", 00:20:53.747 "superblock": true, 00:20:53.747 "num_base_bdevs": 3, 00:20:53.747 "num_base_bdevs_discovered": 3, 00:20:53.747 "num_base_bdevs_operational": 3, 00:20:53.747 "base_bdevs_list": [ 00:20:53.747 { 00:20:53.747 "name": "BaseBdev1", 00:20:53.747 "uuid": "9759217d-2622-464d-bb5a-b2b96c3090ed", 00:20:53.747 "is_configured": true, 00:20:53.747 "data_offset": 2048, 00:20:53.747 "data_size": 63488 00:20:53.747 }, 00:20:53.747 { 00:20:53.747 "name": "BaseBdev2", 00:20:53.747 "uuid": "5a51ff09-1b85-482f-9850-49ccc0cf553e", 00:20:53.747 "is_configured": true, 00:20:53.747 "data_offset": 2048, 00:20:53.747 "data_size": 63488 00:20:53.747 }, 00:20:53.747 { 00:20:53.747 "name": "BaseBdev3", 00:20:53.747 "uuid": "47c6ee54-699e-4b4c-aa13-d703a97b222e", 00:20:53.747 "is_configured": true, 00:20:53.747 "data_offset": 2048, 00:20:53.747 "data_size": 63488 00:20:53.747 } 00:20:53.747 ] 00:20:53.747 } 00:20:53.747 } 00:20:53.747 }' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:53.747 BaseBdev2 00:20:53.747 BaseBdev3' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.747 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.006 [2024-11-06 09:13:30.299386] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.006 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:54.006 "name": "Existed_Raid", 00:20:54.006 "uuid": "0661ade5-3a6b-4e4d-b327-78efecb4cd2c", 00:20:54.006 "strip_size_kb": 64, 00:20:54.006 "state": "online", 00:20:54.006 "raid_level": "raid5f", 00:20:54.006 "superblock": true, 00:20:54.006 "num_base_bdevs": 3, 00:20:54.006 "num_base_bdevs_discovered": 2, 00:20:54.006 "num_base_bdevs_operational": 2, 
00:20:54.006 "base_bdevs_list": [ 00:20:54.006 { 00:20:54.006 "name": null, 00:20:54.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.006 "is_configured": false, 00:20:54.006 "data_offset": 0, 00:20:54.006 "data_size": 63488 00:20:54.006 }, 00:20:54.006 { 00:20:54.006 "name": "BaseBdev2", 00:20:54.006 "uuid": "5a51ff09-1b85-482f-9850-49ccc0cf553e", 00:20:54.006 "is_configured": true, 00:20:54.007 "data_offset": 2048, 00:20:54.007 "data_size": 63488 00:20:54.007 }, 00:20:54.007 { 00:20:54.007 "name": "BaseBdev3", 00:20:54.007 "uuid": "47c6ee54-699e-4b4c-aa13-d703a97b222e", 00:20:54.007 "is_configured": true, 00:20:54.007 "data_offset": 2048, 00:20:54.007 "data_size": 63488 00:20:54.007 } 00:20:54.007 ] 00:20:54.007 }' 00:20:54.007 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:54.007 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.574 09:13:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.574 [2024-11-06 09:13:30.949203] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:54.574 [2024-11-06 09:13:30.949526] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:54.574 [2024-11-06 09:13:31.036409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:54.574 
09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.574 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.574 [2024-11-06 09:13:31.096444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:54.574 [2024-11-06 09:13:31.096657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.834 BaseBdev2 00:20:54.834 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 [ 00:20:54.835 { 
00:20:54.835 "name": "BaseBdev2", 00:20:54.835 "aliases": [ 00:20:54.835 "961d11ba-19aa-449a-9bc9-b593e9b0d6c8" 00:20:54.835 ], 00:20:54.835 "product_name": "Malloc disk", 00:20:54.835 "block_size": 512, 00:20:54.835 "num_blocks": 65536, 00:20:54.835 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:54.835 "assigned_rate_limits": { 00:20:54.835 "rw_ios_per_sec": 0, 00:20:54.835 "rw_mbytes_per_sec": 0, 00:20:54.835 "r_mbytes_per_sec": 0, 00:20:54.835 "w_mbytes_per_sec": 0 00:20:54.835 }, 00:20:54.835 "claimed": false, 00:20:54.835 "zoned": false, 00:20:54.835 "supported_io_types": { 00:20:54.835 "read": true, 00:20:54.835 "write": true, 00:20:54.835 "unmap": true, 00:20:54.835 "flush": true, 00:20:54.835 "reset": true, 00:20:54.835 "nvme_admin": false, 00:20:54.835 "nvme_io": false, 00:20:54.835 "nvme_io_md": false, 00:20:54.835 "write_zeroes": true, 00:20:54.835 "zcopy": true, 00:20:54.835 "get_zone_info": false, 00:20:54.835 "zone_management": false, 00:20:54.835 "zone_append": false, 00:20:54.835 "compare": false, 00:20:54.835 "compare_and_write": false, 00:20:54.835 "abort": true, 00:20:54.835 "seek_hole": false, 00:20:54.835 "seek_data": false, 00:20:54.835 "copy": true, 00:20:54.835 "nvme_iov_md": false 00:20:54.835 }, 00:20:54.835 "memory_domains": [ 00:20:54.835 { 00:20:54.835 "dma_device_id": "system", 00:20:54.835 "dma_device_type": 1 00:20:54.835 }, 00:20:54.835 { 00:20:54.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.835 "dma_device_type": 2 00:20:54.835 } 00:20:54.835 ], 00:20:54.835 "driver_specific": {} 00:20:54.835 } 00:20:54.835 ] 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 BaseBdev3 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.835 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.095 09:13:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.095 [ 00:20:55.095 { 00:20:55.095 "name": "BaseBdev3", 00:20:55.095 "aliases": [ 00:20:55.095 "85fea538-0ff6-4e8e-9b4c-124d906b997d" 00:20:55.095 ], 00:20:55.095 "product_name": "Malloc disk", 00:20:55.095 "block_size": 512, 00:20:55.095 "num_blocks": 65536, 00:20:55.095 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:55.095 "assigned_rate_limits": { 00:20:55.095 "rw_ios_per_sec": 0, 00:20:55.095 "rw_mbytes_per_sec": 0, 00:20:55.095 "r_mbytes_per_sec": 0, 00:20:55.095 "w_mbytes_per_sec": 0 00:20:55.095 }, 00:20:55.095 "claimed": false, 00:20:55.095 "zoned": false, 00:20:55.095 "supported_io_types": { 00:20:55.095 "read": true, 00:20:55.095 "write": true, 00:20:55.095 "unmap": true, 00:20:55.095 "flush": true, 00:20:55.095 "reset": true, 00:20:55.095 "nvme_admin": false, 00:20:55.095 "nvme_io": false, 00:20:55.095 "nvme_io_md": false, 00:20:55.095 "write_zeroes": true, 00:20:55.095 "zcopy": true, 00:20:55.095 "get_zone_info": false, 00:20:55.095 "zone_management": false, 00:20:55.095 "zone_append": false, 00:20:55.095 "compare": false, 00:20:55.095 "compare_and_write": false, 00:20:55.095 "abort": true, 00:20:55.095 "seek_hole": false, 00:20:55.095 "seek_data": false, 00:20:55.095 "copy": true, 00:20:55.095 "nvme_iov_md": false 00:20:55.095 }, 00:20:55.095 "memory_domains": [ 00:20:55.095 { 00:20:55.095 "dma_device_id": "system", 00:20:55.095 "dma_device_type": 1 00:20:55.095 }, 00:20:55.095 { 00:20:55.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.095 "dma_device_type": 2 00:20:55.095 } 00:20:55.095 ], 00:20:55.095 "driver_specific": {} 00:20:55.095 } 00:20:55.095 ] 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.095 [2024-11-06 09:13:31.394315] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:55.095 [2024-11-06 09:13:31.394567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:55.095 [2024-11-06 09:13:31.394735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.095 [2024-11-06 09:13:31.397276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:55.095 09:13:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.095 "name": "Existed_Raid", 00:20:55.095 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:55.095 "strip_size_kb": 64, 00:20:55.095 "state": "configuring", 00:20:55.095 "raid_level": "raid5f", 00:20:55.095 "superblock": true, 00:20:55.095 "num_base_bdevs": 3, 00:20:55.095 "num_base_bdevs_discovered": 2, 00:20:55.095 "num_base_bdevs_operational": 3, 00:20:55.095 "base_bdevs_list": [ 00:20:55.095 { 00:20:55.095 "name": "BaseBdev1", 00:20:55.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.095 "is_configured": false, 00:20:55.095 "data_offset": 0, 00:20:55.095 "data_size": 0 00:20:55.095 }, 00:20:55.095 { 00:20:55.095 "name": "BaseBdev2", 00:20:55.095 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:55.095 "is_configured": true, 00:20:55.095 "data_offset": 2048, 00:20:55.095 "data_size": 63488 00:20:55.095 }, 00:20:55.095 { 
00:20:55.095 "name": "BaseBdev3", 00:20:55.095 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:55.095 "is_configured": true, 00:20:55.095 "data_offset": 2048, 00:20:55.095 "data_size": 63488 00:20:55.095 } 00:20:55.095 ] 00:20:55.095 }' 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.095 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.661 [2024-11-06 09:13:31.934517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.661 "name": "Existed_Raid", 00:20:55.661 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:55.661 "strip_size_kb": 64, 00:20:55.661 "state": "configuring", 00:20:55.661 "raid_level": "raid5f", 00:20:55.661 "superblock": true, 00:20:55.661 "num_base_bdevs": 3, 00:20:55.661 "num_base_bdevs_discovered": 1, 00:20:55.661 "num_base_bdevs_operational": 3, 00:20:55.661 "base_bdevs_list": [ 00:20:55.661 { 00:20:55.661 "name": "BaseBdev1", 00:20:55.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.661 "is_configured": false, 00:20:55.661 "data_offset": 0, 00:20:55.661 "data_size": 0 00:20:55.661 }, 00:20:55.661 { 00:20:55.661 "name": null, 00:20:55.661 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:55.661 "is_configured": false, 00:20:55.661 "data_offset": 0, 00:20:55.661 "data_size": 63488 00:20:55.661 }, 00:20:55.661 { 00:20:55.661 "name": "BaseBdev3", 00:20:55.661 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:55.661 "is_configured": true, 00:20:55.661 "data_offset": 2048, 00:20:55.661 "data_size": 
63488 00:20:55.661 } 00:20:55.661 ] 00:20:55.661 }' 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.661 09:13:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.919 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.919 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.919 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.919 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:55.919 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.177 [2024-11-06 09:13:32.521046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:56.177 BaseBdev1 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:56.177 09:13:32 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.177 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.177 [ 00:20:56.177 { 00:20:56.177 "name": "BaseBdev1", 00:20:56.177 "aliases": [ 00:20:56.177 "a205c7fa-791f-42b3-b16f-43663029cacc" 00:20:56.177 ], 00:20:56.177 "product_name": "Malloc disk", 00:20:56.177 "block_size": 512, 00:20:56.177 "num_blocks": 65536, 00:20:56.178 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:56.178 "assigned_rate_limits": { 00:20:56.178 "rw_ios_per_sec": 0, 00:20:56.178 "rw_mbytes_per_sec": 0, 00:20:56.178 "r_mbytes_per_sec": 0, 00:20:56.178 "w_mbytes_per_sec": 0 00:20:56.178 }, 00:20:56.178 "claimed": true, 00:20:56.178 "claim_type": "exclusive_write", 00:20:56.178 "zoned": false, 00:20:56.178 "supported_io_types": { 00:20:56.178 "read": true, 00:20:56.178 "write": true, 00:20:56.178 "unmap": true, 00:20:56.178 "flush": true, 00:20:56.178 "reset": true, 00:20:56.178 "nvme_admin": false, 00:20:56.178 
"nvme_io": false, 00:20:56.178 "nvme_io_md": false, 00:20:56.178 "write_zeroes": true, 00:20:56.178 "zcopy": true, 00:20:56.178 "get_zone_info": false, 00:20:56.178 "zone_management": false, 00:20:56.178 "zone_append": false, 00:20:56.178 "compare": false, 00:20:56.178 "compare_and_write": false, 00:20:56.178 "abort": true, 00:20:56.178 "seek_hole": false, 00:20:56.178 "seek_data": false, 00:20:56.178 "copy": true, 00:20:56.178 "nvme_iov_md": false 00:20:56.178 }, 00:20:56.178 "memory_domains": [ 00:20:56.178 { 00:20:56.178 "dma_device_id": "system", 00:20:56.178 "dma_device_type": 1 00:20:56.178 }, 00:20:56.178 { 00:20:56.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.178 "dma_device_type": 2 00:20:56.178 } 00:20:56.178 ], 00:20:56.178 "driver_specific": {} 00:20:56.178 } 00:20:56.178 ] 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.178 "name": "Existed_Raid", 00:20:56.178 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:56.178 "strip_size_kb": 64, 00:20:56.178 "state": "configuring", 00:20:56.178 "raid_level": "raid5f", 00:20:56.178 "superblock": true, 00:20:56.178 "num_base_bdevs": 3, 00:20:56.178 "num_base_bdevs_discovered": 2, 00:20:56.178 "num_base_bdevs_operational": 3, 00:20:56.178 "base_bdevs_list": [ 00:20:56.178 { 00:20:56.178 "name": "BaseBdev1", 00:20:56.178 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:56.178 "is_configured": true, 00:20:56.178 "data_offset": 2048, 00:20:56.178 "data_size": 63488 00:20:56.178 }, 00:20:56.178 { 00:20:56.178 "name": null, 00:20:56.178 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:56.178 "is_configured": false, 00:20:56.178 "data_offset": 0, 00:20:56.178 "data_size": 63488 00:20:56.178 }, 00:20:56.178 { 00:20:56.178 "name": "BaseBdev3", 00:20:56.178 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:56.178 "is_configured": true, 00:20:56.178 "data_offset": 2048, 00:20:56.178 "data_size": 
63488 00:20:56.178 } 00:20:56.178 ] 00:20:56.178 }' 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.178 09:13:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.745 [2024-11-06 09:13:33.141299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:56.745 09:13:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.745 "name": "Existed_Raid", 00:20:56.745 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:56.745 "strip_size_kb": 64, 00:20:56.745 "state": "configuring", 00:20:56.745 "raid_level": "raid5f", 00:20:56.745 "superblock": true, 00:20:56.745 "num_base_bdevs": 3, 00:20:56.745 "num_base_bdevs_discovered": 1, 00:20:56.745 "num_base_bdevs_operational": 3, 00:20:56.745 "base_bdevs_list": [ 00:20:56.745 { 00:20:56.745 "name": "BaseBdev1", 00:20:56.745 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 
00:20:56.745 "is_configured": true, 00:20:56.745 "data_offset": 2048, 00:20:56.745 "data_size": 63488 00:20:56.745 }, 00:20:56.745 { 00:20:56.745 "name": null, 00:20:56.745 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:56.745 "is_configured": false, 00:20:56.745 "data_offset": 0, 00:20:56.745 "data_size": 63488 00:20:56.745 }, 00:20:56.745 { 00:20:56.745 "name": null, 00:20:56.745 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:56.745 "is_configured": false, 00:20:56.745 "data_offset": 0, 00:20:56.745 "data_size": 63488 00:20:56.745 } 00:20:56.745 ] 00:20:56.745 }' 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.745 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.312 [2024-11-06 09:13:33.717499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:57.312 "name": "Existed_Raid", 00:20:57.312 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:57.312 "strip_size_kb": 64, 00:20:57.312 "state": "configuring", 00:20:57.312 "raid_level": "raid5f", 00:20:57.312 "superblock": true, 00:20:57.312 "num_base_bdevs": 3, 00:20:57.312 "num_base_bdevs_discovered": 2, 00:20:57.312 "num_base_bdevs_operational": 3, 00:20:57.312 "base_bdevs_list": [ 00:20:57.312 { 00:20:57.312 "name": "BaseBdev1", 00:20:57.312 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:57.312 "is_configured": true, 00:20:57.312 "data_offset": 2048, 00:20:57.312 "data_size": 63488 00:20:57.312 }, 00:20:57.312 { 00:20:57.312 "name": null, 00:20:57.312 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:57.312 "is_configured": false, 00:20:57.312 "data_offset": 0, 00:20:57.312 "data_size": 63488 00:20:57.312 }, 00:20:57.312 { 00:20:57.312 "name": "BaseBdev3", 00:20:57.312 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:57.312 "is_configured": true, 00:20:57.312 "data_offset": 2048, 00:20:57.312 "data_size": 63488 00:20:57.312 } 00:20:57.312 ] 00:20:57.312 }' 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:57.312 09:13:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.878 09:13:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.878 [2024-11-06 09:13:34.309629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.878 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.136 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.136 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.136 "name": "Existed_Raid", 00:20:58.136 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:58.136 "strip_size_kb": 64, 00:20:58.136 "state": "configuring", 00:20:58.136 "raid_level": "raid5f", 00:20:58.136 "superblock": true, 00:20:58.136 "num_base_bdevs": 3, 00:20:58.136 "num_base_bdevs_discovered": 1, 00:20:58.136 "num_base_bdevs_operational": 3, 00:20:58.136 "base_bdevs_list": [ 00:20:58.136 { 00:20:58.136 "name": null, 00:20:58.136 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:58.136 "is_configured": false, 00:20:58.136 "data_offset": 0, 00:20:58.136 "data_size": 63488 00:20:58.136 }, 00:20:58.136 { 00:20:58.136 "name": null, 00:20:58.136 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:58.136 "is_configured": false, 00:20:58.136 "data_offset": 0, 00:20:58.136 "data_size": 63488 00:20:58.136 }, 00:20:58.136 { 00:20:58.136 "name": "BaseBdev3", 00:20:58.136 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:58.136 "is_configured": true, 00:20:58.136 "data_offset": 2048, 00:20:58.136 "data_size": 63488 00:20:58.136 } 00:20:58.136 ] 00:20:58.136 }' 00:20:58.136 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.136 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.394 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.653 [2024-11-06 09:13:34.986136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:58.653 09:13:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.653 09:13:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.653 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.653 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.653 "name": "Existed_Raid", 00:20:58.653 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:58.653 "strip_size_kb": 64, 00:20:58.653 "state": "configuring", 00:20:58.653 "raid_level": "raid5f", 00:20:58.653 "superblock": true, 00:20:58.653 "num_base_bdevs": 3, 00:20:58.653 "num_base_bdevs_discovered": 2, 00:20:58.653 "num_base_bdevs_operational": 3, 00:20:58.653 "base_bdevs_list": [ 00:20:58.653 { 00:20:58.653 "name": null, 00:20:58.653 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:58.653 "is_configured": false, 00:20:58.653 "data_offset": 0, 00:20:58.653 "data_size": 63488 00:20:58.653 }, 00:20:58.653 { 00:20:58.653 "name": "BaseBdev2", 00:20:58.653 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:58.653 "is_configured": true, 00:20:58.653 "data_offset": 2048, 00:20:58.653 "data_size": 63488 00:20:58.653 }, 00:20:58.653 { 
00:20:58.653 "name": "BaseBdev3", 00:20:58.653 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:58.653 "is_configured": true, 00:20:58.653 "data_offset": 2048, 00:20:58.653 "data_size": 63488 00:20:58.653 } 00:20:58.653 ] 00:20:58.653 }' 00:20:58.653 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.653 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a205c7fa-791f-42b3-b16f-43663029cacc 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 [2024-11-06 09:13:35.649706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:59.220 NewBaseBdev 00:20:59.220 [2024-11-06 09:13:35.650227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:59.220 [2024-11-06 09:13:35.650260] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:20:59.220 [2024-11-06 09:13:35.650581] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 [2024-11-06 09:13:35.655587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:59.220 
[2024-11-06 09:13:35.655754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:59.220 [2024-11-06 09:13:35.656224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 [ 00:20:59.220 { 00:20:59.220 "name": "NewBaseBdev", 00:20:59.220 "aliases": [ 00:20:59.220 "a205c7fa-791f-42b3-b16f-43663029cacc" 00:20:59.220 ], 00:20:59.220 "product_name": "Malloc disk", 00:20:59.220 "block_size": 512, 00:20:59.220 "num_blocks": 65536, 00:20:59.220 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:59.220 "assigned_rate_limits": { 00:20:59.220 "rw_ios_per_sec": 0, 00:20:59.220 "rw_mbytes_per_sec": 0, 00:20:59.220 "r_mbytes_per_sec": 0, 00:20:59.220 "w_mbytes_per_sec": 0 00:20:59.220 }, 00:20:59.220 "claimed": true, 00:20:59.220 "claim_type": "exclusive_write", 00:20:59.220 "zoned": false, 00:20:59.220 "supported_io_types": { 00:20:59.220 "read": true, 00:20:59.220 "write": true, 00:20:59.220 "unmap": true, 00:20:59.220 "flush": true, 00:20:59.220 "reset": true, 00:20:59.220 "nvme_admin": false, 00:20:59.220 "nvme_io": false, 00:20:59.220 "nvme_io_md": false, 00:20:59.220 "write_zeroes": true, 00:20:59.220 "zcopy": true, 00:20:59.220 "get_zone_info": false, 00:20:59.220 "zone_management": false, 00:20:59.220 "zone_append": false, 00:20:59.220 "compare": false, 00:20:59.220 "compare_and_write": false, 00:20:59.220 "abort": true, 00:20:59.220 "seek_hole": false, 00:20:59.220 "seek_data": false, 
00:20:59.220 "copy": true, 00:20:59.220 "nvme_iov_md": false 00:20:59.220 }, 00:20:59.220 "memory_domains": [ 00:20:59.220 { 00:20:59.220 "dma_device_id": "system", 00:20:59.220 "dma_device_type": 1 00:20:59.220 }, 00:20:59.220 { 00:20:59.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:59.220 "dma_device_type": 2 00:20:59.220 } 00:20:59.220 ], 00:20:59.220 "driver_specific": {} 00:20:59.220 } 00:20:59.220 ] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.220 09:13:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:59.220 "name": "Existed_Raid", 00:20:59.220 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:59.220 "strip_size_kb": 64, 00:20:59.220 "state": "online", 00:20:59.220 "raid_level": "raid5f", 00:20:59.220 "superblock": true, 00:20:59.220 "num_base_bdevs": 3, 00:20:59.220 "num_base_bdevs_discovered": 3, 00:20:59.220 "num_base_bdevs_operational": 3, 00:20:59.220 "base_bdevs_list": [ 00:20:59.220 { 00:20:59.220 "name": "NewBaseBdev", 00:20:59.220 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:59.220 "is_configured": true, 00:20:59.220 "data_offset": 2048, 00:20:59.220 "data_size": 63488 00:20:59.220 }, 00:20:59.220 { 00:20:59.220 "name": "BaseBdev2", 00:20:59.220 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:59.220 "is_configured": true, 00:20:59.220 "data_offset": 2048, 00:20:59.220 "data_size": 63488 00:20:59.220 }, 00:20:59.220 { 00:20:59.220 "name": "BaseBdev3", 00:20:59.220 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:59.220 "is_configured": true, 00:20:59.220 "data_offset": 2048, 00:20:59.220 "data_size": 63488 00:20:59.220 } 00:20:59.220 ] 00:20:59.220 }' 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:59.220 09:13:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties 
Existed_Raid 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:59.787 [2024-11-06 09:13:36.218485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.787 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:59.788 "name": "Existed_Raid", 00:20:59.788 "aliases": [ 00:20:59.788 "094564fb-aa6f-43f9-b322-c59d522d9213" 00:20:59.788 ], 00:20:59.788 "product_name": "Raid Volume", 00:20:59.788 "block_size": 512, 00:20:59.788 "num_blocks": 126976, 00:20:59.788 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:59.788 "assigned_rate_limits": { 00:20:59.788 "rw_ios_per_sec": 0, 00:20:59.788 "rw_mbytes_per_sec": 0, 00:20:59.788 "r_mbytes_per_sec": 0, 00:20:59.788 "w_mbytes_per_sec": 0 00:20:59.788 }, 00:20:59.788 "claimed": false, 00:20:59.788 "zoned": false, 00:20:59.788 "supported_io_types": { 
00:20:59.788 "read": true, 00:20:59.788 "write": true, 00:20:59.788 "unmap": false, 00:20:59.788 "flush": false, 00:20:59.788 "reset": true, 00:20:59.788 "nvme_admin": false, 00:20:59.788 "nvme_io": false, 00:20:59.788 "nvme_io_md": false, 00:20:59.788 "write_zeroes": true, 00:20:59.788 "zcopy": false, 00:20:59.788 "get_zone_info": false, 00:20:59.788 "zone_management": false, 00:20:59.788 "zone_append": false, 00:20:59.788 "compare": false, 00:20:59.788 "compare_and_write": false, 00:20:59.788 "abort": false, 00:20:59.788 "seek_hole": false, 00:20:59.788 "seek_data": false, 00:20:59.788 "copy": false, 00:20:59.788 "nvme_iov_md": false 00:20:59.788 }, 00:20:59.788 "driver_specific": { 00:20:59.788 "raid": { 00:20:59.788 "uuid": "094564fb-aa6f-43f9-b322-c59d522d9213", 00:20:59.788 "strip_size_kb": 64, 00:20:59.788 "state": "online", 00:20:59.788 "raid_level": "raid5f", 00:20:59.788 "superblock": true, 00:20:59.788 "num_base_bdevs": 3, 00:20:59.788 "num_base_bdevs_discovered": 3, 00:20:59.788 "num_base_bdevs_operational": 3, 00:20:59.788 "base_bdevs_list": [ 00:20:59.788 { 00:20:59.788 "name": "NewBaseBdev", 00:20:59.788 "uuid": "a205c7fa-791f-42b3-b16f-43663029cacc", 00:20:59.788 "is_configured": true, 00:20:59.788 "data_offset": 2048, 00:20:59.788 "data_size": 63488 00:20:59.788 }, 00:20:59.788 { 00:20:59.788 "name": "BaseBdev2", 00:20:59.788 "uuid": "961d11ba-19aa-449a-9bc9-b593e9b0d6c8", 00:20:59.788 "is_configured": true, 00:20:59.788 "data_offset": 2048, 00:20:59.788 "data_size": 63488 00:20:59.788 }, 00:20:59.788 { 00:20:59.788 "name": "BaseBdev3", 00:20:59.788 "uuid": "85fea538-0ff6-4e8e-9b4c-124d906b997d", 00:20:59.788 "is_configured": true, 00:20:59.788 "data_offset": 2048, 00:20:59.788 "data_size": 63488 00:20:59.788 } 00:20:59.788 ] 00:20:59.788 } 00:20:59.788 } 00:20:59.788 }' 00:20:59.788 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:20:59.788 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:59.788 BaseBdev2 00:20:59.788 BaseBdev3' 00:20:59.788 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.047 09:13:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:00.047 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.048 [2024-11-06 09:13:36.546471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:00.048 [2024-11-06 09:13:36.546794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:21:00.048 [2024-11-06 09:13:36.547040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.048 [2024-11-06 09:13:36.547462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.048 [2024-11-06 09:13:36.547486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80768 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80768 ']' 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80768 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:00.048 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80768 00:21:00.307 killing process with pid 80768 00:21:00.307 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:00.307 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:00.307 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80768' 00:21:00.307 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80768 00:21:00.307 [2024-11-06 09:13:36.592001] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:00.307 09:13:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 
-- # wait 80768 00:21:00.565 [2024-11-06 09:13:36.856959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.501 ************************************ 00:21:01.501 END TEST raid5f_state_function_test_sb 00:21:01.501 ************************************ 00:21:01.501 09:13:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:21:01.501 00:21:01.501 real 0m11.743s 00:21:01.501 user 0m19.506s 00:21:01.501 sys 0m1.622s 00:21:01.501 09:13:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:01.501 09:13:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:01.501 09:13:37 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:21:01.501 09:13:37 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:01.501 09:13:37 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:01.501 09:13:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:01.501 ************************************ 00:21:01.501 START TEST raid5f_superblock_test 00:21:01.501 ************************************ 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:01.501 09:13:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81400 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81400 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81400 ']' 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:01.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:01.501 09:13:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.820 [2024-11-06 09:13:38.068462] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:21:01.820 [2024-11-06 09:13:38.068906] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81400 ] 00:21:01.820 [2024-11-06 09:13:38.256469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.079 [2024-11-06 09:13:38.386794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.079 [2024-11-06 09:13:38.595114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.079 [2024-11-06 09:13:38.595463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:02.646 09:13:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.646 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 malloc1 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 [2024-11-06 09:13:39.201485] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:02.906 [2024-11-06 09:13:39.201903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.906 [2024-11-06 09:13:39.201963] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:02.906 [2024-11-06 09:13:39.201980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.906 [2024-11-06 09:13:39.204940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.906 [2024-11-06 09:13:39.204985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:02.906 pt1 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 malloc2 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 [2024-11-06 09:13:39.252194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:02.906 [2024-11-06 09:13:39.252269] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.906 [2024-11-06 09:13:39.252299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:02.906 [2024-11-06 09:13:39.252313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.906 [2024-11-06 09:13:39.255216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.906 pt2 00:21:02.906 [2024-11-06 09:13:39.255472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 malloc3 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 [2024-11-06 09:13:39.316642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:02.906 [2024-11-06 09:13:39.316975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.906 [2024-11-06 09:13:39.317054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:02.906 [2024-11-06 09:13:39.317173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.906 [2024-11-06 09:13:39.319973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.906 pt3 00:21:02.906 [2024-11-06 09:13:39.320143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 [2024-11-06 09:13:39.328826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:02.906 [2024-11-06 
09:13:39.331406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:02.906 [2024-11-06 09:13:39.331647] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:02.906 [2024-11-06 09:13:39.332018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:02.906 [2024-11-06 09:13:39.332196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:02.906 [2024-11-06 09:13:39.332534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:02.906 [2024-11-06 09:13:39.337762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:02.906 [2024-11-06 09:13:39.337951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:02.906 [2024-11-06 09:13:39.338356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.906 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.906 "name": "raid_bdev1", 00:21:02.906 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:02.906 "strip_size_kb": 64, 00:21:02.906 "state": "online", 00:21:02.906 "raid_level": "raid5f", 00:21:02.906 "superblock": true, 00:21:02.906 "num_base_bdevs": 3, 00:21:02.906 "num_base_bdevs_discovered": 3, 00:21:02.906 "num_base_bdevs_operational": 3, 00:21:02.906 "base_bdevs_list": [ 00:21:02.906 { 00:21:02.906 "name": "pt1", 00:21:02.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.906 "is_configured": true, 00:21:02.906 "data_offset": 2048, 00:21:02.906 "data_size": 63488 00:21:02.906 }, 00:21:02.906 { 00:21:02.906 "name": "pt2", 00:21:02.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.906 "is_configured": true, 00:21:02.906 "data_offset": 2048, 00:21:02.906 "data_size": 63488 00:21:02.906 }, 00:21:02.906 { 00:21:02.906 "name": "pt3", 00:21:02.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:02.907 "is_configured": true, 00:21:02.907 "data_offset": 2048, 00:21:02.907 "data_size": 63488 00:21:02.907 } 00:21:02.907 ] 00:21:02.907 }' 00:21:02.907 09:13:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.907 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:03.474 [2024-11-06 09:13:39.917429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:03.474 "name": "raid_bdev1", 00:21:03.474 "aliases": [ 00:21:03.474 "379fc0df-340d-4e20-bc51-786e6e925895" 00:21:03.474 ], 00:21:03.474 "product_name": "Raid Volume", 00:21:03.474 "block_size": 512, 00:21:03.474 "num_blocks": 126976, 00:21:03.474 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:03.474 "assigned_rate_limits": { 00:21:03.474 "rw_ios_per_sec": 0, 00:21:03.474 
"rw_mbytes_per_sec": 0, 00:21:03.474 "r_mbytes_per_sec": 0, 00:21:03.474 "w_mbytes_per_sec": 0 00:21:03.474 }, 00:21:03.474 "claimed": false, 00:21:03.474 "zoned": false, 00:21:03.474 "supported_io_types": { 00:21:03.474 "read": true, 00:21:03.474 "write": true, 00:21:03.474 "unmap": false, 00:21:03.474 "flush": false, 00:21:03.474 "reset": true, 00:21:03.474 "nvme_admin": false, 00:21:03.474 "nvme_io": false, 00:21:03.474 "nvme_io_md": false, 00:21:03.474 "write_zeroes": true, 00:21:03.474 "zcopy": false, 00:21:03.474 "get_zone_info": false, 00:21:03.474 "zone_management": false, 00:21:03.474 "zone_append": false, 00:21:03.474 "compare": false, 00:21:03.474 "compare_and_write": false, 00:21:03.474 "abort": false, 00:21:03.474 "seek_hole": false, 00:21:03.474 "seek_data": false, 00:21:03.474 "copy": false, 00:21:03.474 "nvme_iov_md": false 00:21:03.474 }, 00:21:03.474 "driver_specific": { 00:21:03.474 "raid": { 00:21:03.474 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:03.474 "strip_size_kb": 64, 00:21:03.474 "state": "online", 00:21:03.474 "raid_level": "raid5f", 00:21:03.474 "superblock": true, 00:21:03.474 "num_base_bdevs": 3, 00:21:03.474 "num_base_bdevs_discovered": 3, 00:21:03.474 "num_base_bdevs_operational": 3, 00:21:03.474 "base_bdevs_list": [ 00:21:03.474 { 00:21:03.474 "name": "pt1", 00:21:03.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:03.474 "is_configured": true, 00:21:03.474 "data_offset": 2048, 00:21:03.474 "data_size": 63488 00:21:03.474 }, 00:21:03.474 { 00:21:03.474 "name": "pt2", 00:21:03.474 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.474 "is_configured": true, 00:21:03.474 "data_offset": 2048, 00:21:03.474 "data_size": 63488 00:21:03.474 }, 00:21:03.474 { 00:21:03.474 "name": "pt3", 00:21:03.474 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:03.474 "is_configured": true, 00:21:03.474 "data_offset": 2048, 00:21:03.474 "data_size": 63488 00:21:03.474 } 00:21:03.474 ] 00:21:03.474 } 00:21:03.474 } 
00:21:03.474 }' 00:21:03.474 09:13:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:03.732 pt2 00:21:03.732 pt3' 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.732 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.732 09:13:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.733 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.733 [2024-11-06 09:13:40.261436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=379fc0df-340d-4e20-bc51-786e6e925895 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 379fc0df-340d-4e20-bc51-786e6e925895 ']' 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 [2024-11-06 09:13:40.309153] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.991 [2024-11-06 09:13:40.309425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:03.991 [2024-11-06 09:13:40.309725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.991 [2024-11-06 09:13:40.309836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.991 [2024-11-06 09:13:40.309868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.991 09:13:40 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:03.991 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.992 [2024-11-06 09:13:40.449319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:03.992 [2024-11-06 
09:13:40.452072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:03.992 [2024-11-06 09:13:40.452149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:03.992 [2024-11-06 09:13:40.452263] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:03.992 [2024-11-06 09:13:40.452354] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:03.992 [2024-11-06 09:13:40.452388] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:03.992 [2024-11-06 09:13:40.452414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.992 [2024-11-06 09:13:40.452427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:03.992 request: 00:21:03.992 { 00:21:03.992 "name": "raid_bdev1", 00:21:03.992 "raid_level": "raid5f", 00:21:03.992 "base_bdevs": [ 00:21:03.992 "malloc1", 00:21:03.992 "malloc2", 00:21:03.992 "malloc3" 00:21:03.992 ], 00:21:03.992 "strip_size_kb": 64, 00:21:03.992 "superblock": false, 00:21:03.992 "method": "bdev_raid_create", 00:21:03.992 "req_id": 1 00:21:03.992 } 00:21:03.992 Got JSON-RPC error response 00:21:03.992 response: 00:21:03.992 { 00:21:03.992 "code": -17, 00:21:03.992 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:03.992 } 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 
00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.992 [2024-11-06 09:13:40.517204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:03.992 [2024-11-06 09:13:40.517463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.992 [2024-11-06 09:13:40.517640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:03.992 [2024-11-06 09:13:40.517771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.992 [2024-11-06 09:13:40.520942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.992 [2024-11-06 09:13:40.521099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:03.992 [2024-11-06 09:13:40.521363] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:03.992 [2024-11-06 09:13:40.521534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:03.992 pt1 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.992 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:04.267 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.267 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:04.267 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:04.267 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:04.267 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:04.267 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.267 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.268 "name": "raid_bdev1", 00:21:04.268 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:04.268 "strip_size_kb": 64, 00:21:04.268 "state": "configuring", 00:21:04.268 "raid_level": "raid5f", 00:21:04.268 "superblock": true, 00:21:04.268 "num_base_bdevs": 3, 00:21:04.268 "num_base_bdevs_discovered": 1, 00:21:04.268 "num_base_bdevs_operational": 3, 00:21:04.268 "base_bdevs_list": [ 00:21:04.268 { 00:21:04.268 "name": "pt1", 00:21:04.268 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:04.268 "is_configured": true, 00:21:04.268 "data_offset": 2048, 00:21:04.268 "data_size": 63488 00:21:04.268 }, 00:21:04.268 { 00:21:04.268 "name": null, 00:21:04.268 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.268 "is_configured": false, 00:21:04.268 "data_offset": 2048, 00:21:04.268 "data_size": 63488 00:21:04.268 }, 00:21:04.268 { 00:21:04.268 "name": null, 00:21:04.268 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:04.268 "is_configured": false, 00:21:04.268 "data_offset": 2048, 00:21:04.268 "data_size": 63488 00:21:04.268 } 00:21:04.268 ] 00:21:04.268 }' 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.268 09:13:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.556 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.557 [2024-11-06 09:13:41.049601] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:04.557 [2024-11-06 09:13:41.049702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:04.557 [2024-11-06 09:13:41.049733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:04.557 [2024-11-06 09:13:41.049747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:04.557 [2024-11-06 09:13:41.050389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:04.557 [2024-11-06 09:13:41.050430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:04.557 [2024-11-06 09:13:41.050535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:04.557 [2024-11-06 09:13:41.050596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:04.557 pt2 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.557 [2024-11-06 09:13:41.057580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.557 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.814 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.814 "name": "raid_bdev1", 00:21:04.814 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:04.814 "strip_size_kb": 64, 00:21:04.814 "state": "configuring", 00:21:04.814 "raid_level": "raid5f", 00:21:04.814 "superblock": true, 00:21:04.814 "num_base_bdevs": 3, 00:21:04.814 "num_base_bdevs_discovered": 1, 00:21:04.814 "num_base_bdevs_operational": 3, 00:21:04.814 "base_bdevs_list": [ 00:21:04.814 { 00:21:04.814 "name": "pt1", 00:21:04.814 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:04.814 "is_configured": true, 00:21:04.814 "data_offset": 2048, 00:21:04.814 "data_size": 63488 00:21:04.814 }, 00:21:04.814 { 
00:21:04.814 "name": null, 00:21:04.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:04.814 "is_configured": false, 00:21:04.814 "data_offset": 0, 00:21:04.814 "data_size": 63488 00:21:04.814 }, 00:21:04.814 { 00:21:04.814 "name": null, 00:21:04.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:04.814 "is_configured": false, 00:21:04.814 "data_offset": 2048, 00:21:04.814 "data_size": 63488 00:21:04.814 } 00:21:04.814 ] 00:21:04.814 }' 00:21:04.814 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.814 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.073 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:05.073 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:05.073 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:05.073 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.073 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.073 [2024-11-06 09:13:41.605718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:05.073 [2024-11-06 09:13:41.606041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.073 [2024-11-06 09:13:41.606081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:05.073 [2024-11-06 09:13:41.606100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.073 [2024-11-06 09:13:41.606700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.073 [2024-11-06 09:13:41.606732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:05.073 [2024-11-06 
09:13:41.606855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:05.073 [2024-11-06 09:13:41.606894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:05.332 pt2 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.332 [2024-11-06 09:13:41.617698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:05.332 [2024-11-06 09:13:41.617757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.332 [2024-11-06 09:13:41.617796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:05.332 [2024-11-06 09:13:41.617812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.332 [2024-11-06 09:13:41.618278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.332 [2024-11-06 09:13:41.618328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:05.332 [2024-11-06 09:13:41.618409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:05.332 [2024-11-06 09:13:41.618442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:05.332 [2024-11-06 09:13:41.618614] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:05.332 [2024-11-06 09:13:41.618643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:05.332 [2024-11-06 09:13:41.618980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:05.332 pt3 00:21:05.332 [2024-11-06 09:13:41.623962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:05.332 [2024-11-06 09:13:41.623988] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:05.332 [2024-11-06 09:13:41.624226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.332 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.333 "name": "raid_bdev1", 00:21:05.333 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:05.333 "strip_size_kb": 64, 00:21:05.333 "state": "online", 00:21:05.333 "raid_level": "raid5f", 00:21:05.333 "superblock": true, 00:21:05.333 "num_base_bdevs": 3, 00:21:05.333 "num_base_bdevs_discovered": 3, 00:21:05.333 "num_base_bdevs_operational": 3, 00:21:05.333 "base_bdevs_list": [ 00:21:05.333 { 00:21:05.333 "name": "pt1", 00:21:05.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:05.333 "is_configured": true, 00:21:05.333 "data_offset": 2048, 00:21:05.333 "data_size": 63488 00:21:05.333 }, 00:21:05.333 { 00:21:05.333 "name": "pt2", 00:21:05.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.333 "is_configured": true, 00:21:05.333 "data_offset": 2048, 00:21:05.333 "data_size": 63488 00:21:05.333 }, 00:21:05.333 { 00:21:05.333 "name": "pt3", 00:21:05.333 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:05.333 "is_configured": true, 00:21:05.333 "data_offset": 2048, 00:21:05.333 "data_size": 63488 00:21:05.333 } 00:21:05.333 ] 00:21:05.333 }' 00:21:05.333 09:13:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.333 09:13:41 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.900 [2024-11-06 09:13:42.186811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:05.900 "name": "raid_bdev1", 00:21:05.900 "aliases": [ 00:21:05.900 "379fc0df-340d-4e20-bc51-786e6e925895" 00:21:05.900 ], 00:21:05.900 "product_name": "Raid Volume", 00:21:05.900 "block_size": 512, 00:21:05.900 "num_blocks": 126976, 00:21:05.900 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:05.900 "assigned_rate_limits": { 00:21:05.900 "rw_ios_per_sec": 0, 00:21:05.900 "rw_mbytes_per_sec": 0, 00:21:05.900 "r_mbytes_per_sec": 0, 00:21:05.900 "w_mbytes_per_sec": 0 00:21:05.900 }, 
00:21:05.900 "claimed": false, 00:21:05.900 "zoned": false, 00:21:05.900 "supported_io_types": { 00:21:05.900 "read": true, 00:21:05.900 "write": true, 00:21:05.900 "unmap": false, 00:21:05.900 "flush": false, 00:21:05.900 "reset": true, 00:21:05.900 "nvme_admin": false, 00:21:05.900 "nvme_io": false, 00:21:05.900 "nvme_io_md": false, 00:21:05.900 "write_zeroes": true, 00:21:05.900 "zcopy": false, 00:21:05.900 "get_zone_info": false, 00:21:05.900 "zone_management": false, 00:21:05.900 "zone_append": false, 00:21:05.900 "compare": false, 00:21:05.900 "compare_and_write": false, 00:21:05.900 "abort": false, 00:21:05.900 "seek_hole": false, 00:21:05.900 "seek_data": false, 00:21:05.900 "copy": false, 00:21:05.900 "nvme_iov_md": false 00:21:05.900 }, 00:21:05.900 "driver_specific": { 00:21:05.900 "raid": { 00:21:05.900 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:05.900 "strip_size_kb": 64, 00:21:05.900 "state": "online", 00:21:05.900 "raid_level": "raid5f", 00:21:05.900 "superblock": true, 00:21:05.900 "num_base_bdevs": 3, 00:21:05.900 "num_base_bdevs_discovered": 3, 00:21:05.900 "num_base_bdevs_operational": 3, 00:21:05.900 "base_bdevs_list": [ 00:21:05.900 { 00:21:05.900 "name": "pt1", 00:21:05.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:05.900 "is_configured": true, 00:21:05.900 "data_offset": 2048, 00:21:05.900 "data_size": 63488 00:21:05.900 }, 00:21:05.900 { 00:21:05.900 "name": "pt2", 00:21:05.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:05.900 "is_configured": true, 00:21:05.900 "data_offset": 2048, 00:21:05.900 "data_size": 63488 00:21:05.900 }, 00:21:05.900 { 00:21:05.900 "name": "pt3", 00:21:05.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:05.900 "is_configured": true, 00:21:05.900 "data_offset": 2048, 00:21:05.900 "data_size": 63488 00:21:05.900 } 00:21:05.900 ] 00:21:05.900 } 00:21:05.900 } 00:21:05.900 }' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:05.900 pt2 00:21:05.900 pt3' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.900 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:21:05.901 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.159 [2024-11-06 09:13:42.518875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
379fc0df-340d-4e20-bc51-786e6e925895 '!=' 379fc0df-340d-4e20-bc51-786e6e925895 ']' 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.159 [2024-11-06 09:13:42.566669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.159 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.159 "name": "raid_bdev1", 00:21:06.160 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:06.160 "strip_size_kb": 64, 00:21:06.160 "state": "online", 00:21:06.160 "raid_level": "raid5f", 00:21:06.160 "superblock": true, 00:21:06.160 "num_base_bdevs": 3, 00:21:06.160 "num_base_bdevs_discovered": 2, 00:21:06.160 "num_base_bdevs_operational": 2, 00:21:06.160 "base_bdevs_list": [ 00:21:06.160 { 00:21:06.160 "name": null, 00:21:06.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.160 "is_configured": false, 00:21:06.160 "data_offset": 0, 00:21:06.160 "data_size": 63488 00:21:06.160 }, 00:21:06.160 { 00:21:06.160 "name": "pt2", 00:21:06.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.160 "is_configured": true, 00:21:06.160 "data_offset": 2048, 00:21:06.160 "data_size": 63488 00:21:06.160 }, 00:21:06.160 { 00:21:06.160 "name": "pt3", 00:21:06.160 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:06.160 "is_configured": true, 00:21:06.160 "data_offset": 2048, 00:21:06.160 "data_size": 63488 00:21:06.160 } 00:21:06.160 ] 00:21:06.160 }' 00:21:06.160 09:13:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.160 09:13:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 
09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 [2024-11-06 09:13:43.062777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.727 [2024-11-06 09:13:43.062958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:06.727 [2024-11-06 09:13:43.063084] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:06.727 [2024-11-06 09:13:43.063163] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:06.727 [2024-11-06 09:13:43.063187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.727 [2024-11-06 09:13:43.142749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:21:06.727 [2024-11-06 09:13:43.142974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.727 [2024-11-06 09:13:43.143011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:06.727 [2024-11-06 09:13:43.143030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.727 [2024-11-06 09:13:43.146105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.727 [2024-11-06 09:13:43.146156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:06.727 [2024-11-06 09:13:43.146256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:06.727 [2024-11-06 09:13:43.146323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:06.727 pt2 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.727 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.728 "name": "raid_bdev1", 00:21:06.728 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:06.728 "strip_size_kb": 64, 00:21:06.728 "state": "configuring", 00:21:06.728 "raid_level": "raid5f", 00:21:06.728 "superblock": true, 00:21:06.728 "num_base_bdevs": 3, 00:21:06.728 "num_base_bdevs_discovered": 1, 00:21:06.728 "num_base_bdevs_operational": 2, 00:21:06.728 "base_bdevs_list": [ 00:21:06.728 { 00:21:06.728 "name": null, 00:21:06.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.728 "is_configured": false, 00:21:06.728 "data_offset": 2048, 00:21:06.728 "data_size": 63488 00:21:06.728 }, 00:21:06.728 { 00:21:06.728 "name": "pt2", 00:21:06.728 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:06.728 "is_configured": true, 00:21:06.728 "data_offset": 2048, 00:21:06.728 "data_size": 63488 00:21:06.728 }, 00:21:06.728 { 00:21:06.728 "name": null, 00:21:06.728 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:06.728 "is_configured": false, 00:21:06.728 "data_offset": 2048, 00:21:06.728 "data_size": 63488 00:21:06.728 } 00:21:06.728 ] 00:21:06.728 }' 00:21:06.728 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.728 09:13:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.296 [2024-11-06 09:13:43.698958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:07.296 [2024-11-06 09:13:43.699179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.296 [2024-11-06 09:13:43.699260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:07.296 [2024-11-06 09:13:43.699390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.296 [2024-11-06 09:13:43.700062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.296 [2024-11-06 09:13:43.700228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:07.296 [2024-11-06 09:13:43.700348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:07.296 [2024-11-06 09:13:43.700399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:07.296 [2024-11-06 09:13:43.700551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:07.296 [2024-11-06 09:13:43.700573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:07.296 [2024-11-06 
09:13:43.700906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:07.296 pt3 00:21:07.296 [2024-11-06 09:13:43.705958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:07.296 [2024-11-06 09:13:43.705986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:07.296 [2024-11-06 09:13:43.706414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.296 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.297 09:13:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.297 "name": "raid_bdev1", 00:21:07.297 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:07.297 "strip_size_kb": 64, 00:21:07.297 "state": "online", 00:21:07.297 "raid_level": "raid5f", 00:21:07.297 "superblock": true, 00:21:07.297 "num_base_bdevs": 3, 00:21:07.297 "num_base_bdevs_discovered": 2, 00:21:07.297 "num_base_bdevs_operational": 2, 00:21:07.297 "base_bdevs_list": [ 00:21:07.297 { 00:21:07.297 "name": null, 00:21:07.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.297 "is_configured": false, 00:21:07.297 "data_offset": 2048, 00:21:07.297 "data_size": 63488 00:21:07.297 }, 00:21:07.297 { 00:21:07.297 "name": "pt2", 00:21:07.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.297 "is_configured": true, 00:21:07.297 "data_offset": 2048, 00:21:07.297 "data_size": 63488 00:21:07.297 }, 00:21:07.297 { 00:21:07.297 "name": "pt3", 00:21:07.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:07.297 "is_configured": true, 00:21:07.297 "data_offset": 2048, 00:21:07.297 "data_size": 63488 00:21:07.297 } 00:21:07.297 ] 00:21:07.297 }' 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.297 09:13:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.863 [2024-11-06 09:13:44.248156] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.863 [2024-11-06 09:13:44.248326] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:07.863 [2024-11-06 09:13:44.248597] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.863 [2024-11-06 09:13:44.248713] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.863 [2024-11-06 09:13:44.248731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.863 09:13:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.863 [2024-11-06 09:13:44.320246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:07.863 [2024-11-06 09:13:44.320505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.863 [2024-11-06 09:13:44.320547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:07.863 [2024-11-06 09:13:44.320563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.863 [2024-11-06 09:13:44.323680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.863 pt1 00:21:07.863 [2024-11-06 09:13:44.323909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:07.863 [2024-11-06 09:13:44.324046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:07.863 [2024-11-06 09:13:44.324108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:07.863 [2024-11-06 09:13:44.324297] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:07.863 [2024-11-06 09:13:44.324315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:07.863 [2024-11-06 09:13:44.324337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 
00:21:07.863 [2024-11-06 09:13:44.324436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.863 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.863 "name": "raid_bdev1", 00:21:07.863 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:07.863 "strip_size_kb": 64, 00:21:07.863 "state": "configuring", 00:21:07.863 "raid_level": "raid5f", 00:21:07.863 "superblock": true, 00:21:07.863 "num_base_bdevs": 3, 00:21:07.863 "num_base_bdevs_discovered": 1, 00:21:07.863 "num_base_bdevs_operational": 2, 00:21:07.864 "base_bdevs_list": [ 00:21:07.864 { 00:21:07.864 "name": null, 00:21:07.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.864 "is_configured": false, 00:21:07.864 "data_offset": 2048, 00:21:07.864 "data_size": 63488 00:21:07.864 }, 00:21:07.864 { 00:21:07.864 "name": "pt2", 00:21:07.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.864 "is_configured": true, 00:21:07.864 "data_offset": 2048, 00:21:07.864 "data_size": 63488 00:21:07.864 }, 00:21:07.864 { 00:21:07.864 "name": null, 00:21:07.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:07.864 "is_configured": false, 00:21:07.864 "data_offset": 2048, 00:21:07.864 "data_size": 63488 00:21:07.864 } 00:21:07.864 ] 00:21:07.864 }' 00:21:07.864 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.864 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.430 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.430 [2024-11-06 09:13:44.896629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:08.430 [2024-11-06 09:13:44.896883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.430 [2024-11-06 09:13:44.896928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:08.430 [2024-11-06 09:13:44.896945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.430 [2024-11-06 09:13:44.897558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.430 [2024-11-06 09:13:44.897606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:08.430 [2024-11-06 09:13:44.897712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:08.430 [2024-11-06 09:13:44.897752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:08.430 [2024-11-06 09:13:44.897927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:08.431 [2024-11-06 09:13:44.897944] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:08.431 [2024-11-06 09:13:44.898246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:08.431 [2024-11-06 09:13:44.903566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:08.431 [2024-11-06 
09:13:44.903759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:08.431 [2024-11-06 09:13:44.904294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.431 pt3 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.431 09:13:44 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.431 "name": "raid_bdev1", 00:21:08.431 "uuid": "379fc0df-340d-4e20-bc51-786e6e925895", 00:21:08.431 "strip_size_kb": 64, 00:21:08.431 "state": "online", 00:21:08.431 "raid_level": "raid5f", 00:21:08.431 "superblock": true, 00:21:08.431 "num_base_bdevs": 3, 00:21:08.431 "num_base_bdevs_discovered": 2, 00:21:08.431 "num_base_bdevs_operational": 2, 00:21:08.431 "base_bdevs_list": [ 00:21:08.431 { 00:21:08.431 "name": null, 00:21:08.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.431 "is_configured": false, 00:21:08.431 "data_offset": 2048, 00:21:08.431 "data_size": 63488 00:21:08.431 }, 00:21:08.431 { 00:21:08.431 "name": "pt2", 00:21:08.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.431 "is_configured": true, 00:21:08.431 "data_offset": 2048, 00:21:08.431 "data_size": 63488 00:21:08.431 }, 00:21:08.431 { 00:21:08.431 "name": "pt3", 00:21:08.431 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:08.431 "is_configured": true, 00:21:08.431 "data_offset": 2048, 00:21:08.431 "data_size": 63488 00:21:08.431 } 00:21:08.431 ] 00:21:08.431 }' 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.431 09:13:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.998 [2024-11-06 09:13:45.494490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.998 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 379fc0df-340d-4e20-bc51-786e6e925895 '!=' 379fc0df-340d-4e20-bc51-786e6e925895 ']' 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81400 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81400 ']' 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81400 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81400 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:09.256 killing process with pid 81400 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 81400' 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 81400 00:21:09.256 [2024-11-06 09:13:45.574786] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:09.256 [2024-11-06 09:13:45.574911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:09.256 09:13:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 81400 00:21:09.256 [2024-11-06 09:13:45.574989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:09.256 [2024-11-06 09:13:45.575008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:09.515 [2024-11-06 09:13:45.847283] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.449 09:13:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:21:10.449 00:21:10.449 real 0m8.939s 00:21:10.449 user 0m14.610s 00:21:10.450 sys 0m1.341s 00:21:10.450 09:13:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.450 09:13:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.450 ************************************ 00:21:10.450 END TEST raid5f_superblock_test 00:21:10.450 ************************************ 00:21:10.450 09:13:46 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:21:10.450 09:13:46 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:21:10.450 09:13:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:10.450 09:13:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.450 09:13:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.450 ************************************ 00:21:10.450 START TEST 
raid5f_rebuild_test 00:21:10.450 ************************************ 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:10.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81851 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81851 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 81851 ']' 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.450 09:13:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.709 [2024-11-06 09:13:47.100199] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:21:10.709 [2024-11-06 09:13:47.100674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81851 ] 00:21:10.709 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:10.709 Zero copy mechanism will not be used. 00:21:10.967 [2024-11-06 09:13:47.299229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.967 [2024-11-06 09:13:47.451079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.225 [2024-11-06 09:13:47.659578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.225 [2024-11-06 09:13:47.659938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 BaseBdev1_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 [2024-11-06 09:13:48.126192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:11.793 [2024-11-06 09:13:48.126316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.793 [2024-11-06 09:13:48.126348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:11.793 [2024-11-06 09:13:48.126367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.793 [2024-11-06 09:13:48.129386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.793 [2024-11-06 09:13:48.129465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:11.793 BaseBdev1 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 BaseBdev2_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 [2024-11-06 09:13:48.179120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:11.793 [2024-11-06 09:13:48.179222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.793 [2024-11-06 09:13:48.179276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:11.793 [2024-11-06 09:13:48.179296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.793 [2024-11-06 09:13:48.182114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.793 [2024-11-06 09:13:48.182160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:11.793 BaseBdev2 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 BaseBdev3_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 [2024-11-06 09:13:48.239795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:11.793 [2024-11-06 09:13:48.239893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.793 [2024-11-06 09:13:48.239940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:11.793 [2024-11-06 09:13:48.239961] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.793 [2024-11-06 09:13:48.242694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.793 [2024-11-06 09:13:48.242744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:11.793 BaseBdev3 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 spare_malloc 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.793 spare_delay 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.793 09:13:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.793 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.794 [2024-11-06 09:13:48.301114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:11.794 [2024-11-06 09:13:48.301183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.794 [2024-11-06 09:13:48.301230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:11.794 [2024-11-06 09:13:48.301250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.794 [2024-11-06 09:13:48.304252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.794 [2024-11-06 09:13:48.304303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:11.794 spare 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.794 [2024-11-06 09:13:48.309297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.794 [2024-11-06 09:13:48.311702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.794 [2024-11-06 09:13:48.311830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:11.794 [2024-11-06 09:13:48.311968] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:11.794 [2024-11-06 09:13:48.311997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:11.794 [2024-11-06 09:13:48.312351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:11.794 [2024-11-06 09:13:48.317650] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:11.794 [2024-11-06 09:13:48.317702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:11.794 [2024-11-06 09:13:48.317964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.794 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.052 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.052 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:12.052 "name": "raid_bdev1", 00:21:12.052 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:12.052 "strip_size_kb": 64, 00:21:12.052 "state": "online", 00:21:12.052 "raid_level": "raid5f", 00:21:12.052 "superblock": false, 00:21:12.052 "num_base_bdevs": 3, 00:21:12.052 "num_base_bdevs_discovered": 3, 00:21:12.052 "num_base_bdevs_operational": 3, 00:21:12.052 "base_bdevs_list": [ 00:21:12.052 { 00:21:12.052 "name": "BaseBdev1", 00:21:12.052 "uuid": "19a259e8-3370-515b-b237-bcd465446918", 00:21:12.052 "is_configured": true, 00:21:12.052 "data_offset": 0, 00:21:12.052 "data_size": 65536 00:21:12.052 }, 00:21:12.052 { 00:21:12.052 "name": "BaseBdev2", 00:21:12.052 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:12.052 "is_configured": true, 00:21:12.053 "data_offset": 0, 00:21:12.053 "data_size": 65536 00:21:12.053 }, 00:21:12.053 { 00:21:12.053 "name": "BaseBdev3", 00:21:12.053 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:12.053 "is_configured": true, 00:21:12.053 "data_offset": 0, 00:21:12.053 "data_size": 65536 00:21:12.053 } 00:21:12.053 ] 00:21:12.053 }' 00:21:12.053 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:12.053 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.310 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:21:12.310 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.310 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.310 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:12.310 [2024-11-06 09:13:48.832210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:21:12.567 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:12.568 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:12.568 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:12.568 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:12.568 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.568 09:13:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:12.826 [2024-11-06 09:13:49.224179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:12.826 /dev/nbd0 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:12.826 1+0 records in 00:21:12.826 1+0 records out 00:21:12.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491136 s, 8.3 MB/s 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:21:12.826 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:21:13.392 512+0 records in 00:21:13.392 512+0 records out 00:21:13.392 67108864 bytes (67 MB, 64 MiB) copied, 0.482479 s, 139 MB/s 00:21:13.392 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:13.392 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:13.392 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:13.392 09:13:49 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:13.392 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:13.392 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:13.392 09:13:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:13.650 [2024-11-06 09:13:50.109185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.650 [2024-11-06 09:13:50.115096] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:13.650 
09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.650 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.651 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:13.651 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.651 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:13.651 "name": "raid_bdev1", 00:21:13.651 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:13.651 "strip_size_kb": 64, 00:21:13.651 "state": "online", 00:21:13.651 "raid_level": "raid5f", 00:21:13.651 "superblock": false, 00:21:13.651 "num_base_bdevs": 3, 00:21:13.651 "num_base_bdevs_discovered": 2, 00:21:13.651 "num_base_bdevs_operational": 2, 00:21:13.651 "base_bdevs_list": [ 00:21:13.651 { 
00:21:13.651 "name": null, 00:21:13.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.651 "is_configured": false, 00:21:13.651 "data_offset": 0, 00:21:13.651 "data_size": 65536 00:21:13.651 }, 00:21:13.651 { 00:21:13.651 "name": "BaseBdev2", 00:21:13.651 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:13.651 "is_configured": true, 00:21:13.651 "data_offset": 0, 00:21:13.651 "data_size": 65536 00:21:13.651 }, 00:21:13.651 { 00:21:13.651 "name": "BaseBdev3", 00:21:13.651 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:13.651 "is_configured": true, 00:21:13.651 "data_offset": 0, 00:21:13.651 "data_size": 65536 00:21:13.651 } 00:21:13.651 ] 00:21:13.651 }' 00:21:13.651 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:13.651 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.217 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:14.217 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.217 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.217 [2024-11-06 09:13:50.635303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:14.217 [2024-11-06 09:13:50.650925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:21:14.217 09:13:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.217 09:13:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:14.217 [2024-11-06 09:13:50.658408] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.153 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.411 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.411 "name": "raid_bdev1", 00:21:15.411 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:15.411 "strip_size_kb": 64, 00:21:15.411 "state": "online", 00:21:15.411 "raid_level": "raid5f", 00:21:15.411 "superblock": false, 00:21:15.411 "num_base_bdevs": 3, 00:21:15.411 "num_base_bdevs_discovered": 3, 00:21:15.411 "num_base_bdevs_operational": 3, 00:21:15.411 "process": { 00:21:15.411 "type": "rebuild", 00:21:15.411 "target": "spare", 00:21:15.411 "progress": { 00:21:15.411 "blocks": 18432, 00:21:15.411 "percent": 14 00:21:15.411 } 00:21:15.411 }, 00:21:15.411 "base_bdevs_list": [ 00:21:15.411 { 00:21:15.411 "name": "spare", 00:21:15.411 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:15.411 "is_configured": true, 00:21:15.411 "data_offset": 0, 00:21:15.411 "data_size": 65536 00:21:15.411 }, 00:21:15.411 { 00:21:15.411 "name": "BaseBdev2", 00:21:15.411 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:15.411 "is_configured": true, 00:21:15.411 "data_offset": 0, 00:21:15.411 
"data_size": 65536 00:21:15.411 }, 00:21:15.411 { 00:21:15.411 "name": "BaseBdev3", 00:21:15.411 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:15.411 "is_configured": true, 00:21:15.411 "data_offset": 0, 00:21:15.411 "data_size": 65536 00:21:15.411 } 00:21:15.411 ] 00:21:15.411 }' 00:21:15.411 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.411 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.411 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.412 [2024-11-06 09:13:51.824619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.412 [2024-11-06 09:13:51.873837] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:15.412 [2024-11-06 09:13:51.873955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.412 [2024-11-06 09:13:51.873987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:15.412 [2024-11-06 09:13:51.874000] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.412 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.670 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:15.670 "name": "raid_bdev1", 00:21:15.670 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:15.670 "strip_size_kb": 64, 00:21:15.670 "state": "online", 00:21:15.670 "raid_level": "raid5f", 00:21:15.670 "superblock": false, 00:21:15.670 "num_base_bdevs": 3, 00:21:15.670 "num_base_bdevs_discovered": 2, 00:21:15.670 "num_base_bdevs_operational": 2, 00:21:15.670 "base_bdevs_list": [ 00:21:15.670 { 00:21:15.670 "name": null, 00:21:15.670 
"uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.670 "is_configured": false, 00:21:15.670 "data_offset": 0, 00:21:15.670 "data_size": 65536 00:21:15.670 }, 00:21:15.670 { 00:21:15.670 "name": "BaseBdev2", 00:21:15.670 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:15.670 "is_configured": true, 00:21:15.670 "data_offset": 0, 00:21:15.670 "data_size": 65536 00:21:15.670 }, 00:21:15.670 { 00:21:15.670 "name": "BaseBdev3", 00:21:15.670 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:15.670 "is_configured": true, 00:21:15.670 "data_offset": 0, 00:21:15.670 "data_size": 65536 00:21:15.670 } 00:21:15.670 ] 00:21:15.670 }' 00:21:15.670 09:13:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:15.670 09:13:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.929 "name": "raid_bdev1", 00:21:15.929 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:15.929 "strip_size_kb": 64, 00:21:15.929 "state": "online", 00:21:15.929 "raid_level": "raid5f", 00:21:15.929 "superblock": false, 00:21:15.929 "num_base_bdevs": 3, 00:21:15.929 "num_base_bdevs_discovered": 2, 00:21:15.929 "num_base_bdevs_operational": 2, 00:21:15.929 "base_bdevs_list": [ 00:21:15.929 { 00:21:15.929 "name": null, 00:21:15.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.929 "is_configured": false, 00:21:15.929 "data_offset": 0, 00:21:15.929 "data_size": 65536 00:21:15.929 }, 00:21:15.929 { 00:21:15.929 "name": "BaseBdev2", 00:21:15.929 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:15.929 "is_configured": true, 00:21:15.929 "data_offset": 0, 00:21:15.929 "data_size": 65536 00:21:15.929 }, 00:21:15.929 { 00:21:15.929 "name": "BaseBdev3", 00:21:15.929 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:15.929 "is_configured": true, 00:21:15.929 "data_offset": 0, 00:21:15.929 "data_size": 65536 00:21:15.929 } 00:21:15.929 ] 00:21:15.929 }' 00:21:15.929 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.187 [2024-11-06 09:13:52.557663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:21:16.187 [2024-11-06 09:13:52.572131] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.187 09:13:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:16.187 [2024-11-06 09:13:52.579318] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.120 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.120 "name": "raid_bdev1", 00:21:17.120 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:17.120 "strip_size_kb": 64, 00:21:17.120 "state": "online", 00:21:17.120 "raid_level": "raid5f", 00:21:17.120 "superblock": false, 00:21:17.120 "num_base_bdevs": 3, 00:21:17.121 
"num_base_bdevs_discovered": 3, 00:21:17.121 "num_base_bdevs_operational": 3, 00:21:17.121 "process": { 00:21:17.121 "type": "rebuild", 00:21:17.121 "target": "spare", 00:21:17.121 "progress": { 00:21:17.121 "blocks": 18432, 00:21:17.121 "percent": 14 00:21:17.121 } 00:21:17.121 }, 00:21:17.121 "base_bdevs_list": [ 00:21:17.121 { 00:21:17.121 "name": "spare", 00:21:17.121 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:17.121 "is_configured": true, 00:21:17.121 "data_offset": 0, 00:21:17.121 "data_size": 65536 00:21:17.121 }, 00:21:17.121 { 00:21:17.121 "name": "BaseBdev2", 00:21:17.121 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:17.121 "is_configured": true, 00:21:17.121 "data_offset": 0, 00:21:17.121 "data_size": 65536 00:21:17.121 }, 00:21:17.121 { 00:21:17.121 "name": "BaseBdev3", 00:21:17.121 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:17.121 "is_configured": true, 00:21:17.121 "data_offset": 0, 00:21:17.121 "data_size": 65536 00:21:17.121 } 00:21:17.121 ] 00:21:17.121 }' 00:21:17.121 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=593 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.380 "name": "raid_bdev1", 00:21:17.380 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:17.380 "strip_size_kb": 64, 00:21:17.380 "state": "online", 00:21:17.380 "raid_level": "raid5f", 00:21:17.380 "superblock": false, 00:21:17.380 "num_base_bdevs": 3, 00:21:17.380 "num_base_bdevs_discovered": 3, 00:21:17.380 "num_base_bdevs_operational": 3, 00:21:17.380 "process": { 00:21:17.380 "type": "rebuild", 00:21:17.380 "target": "spare", 00:21:17.380 "progress": { 00:21:17.380 "blocks": 22528, 00:21:17.380 "percent": 17 00:21:17.380 } 00:21:17.380 }, 00:21:17.380 "base_bdevs_list": [ 00:21:17.380 { 00:21:17.380 "name": "spare", 00:21:17.380 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:17.380 "is_configured": true, 00:21:17.380 "data_offset": 0, 00:21:17.380 
"data_size": 65536 00:21:17.380 }, 00:21:17.380 { 00:21:17.380 "name": "BaseBdev2", 00:21:17.380 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:17.380 "is_configured": true, 00:21:17.380 "data_offset": 0, 00:21:17.380 "data_size": 65536 00:21:17.380 }, 00:21:17.380 { 00:21:17.380 "name": "BaseBdev3", 00:21:17.380 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:17.380 "is_configured": true, 00:21:17.380 "data_offset": 0, 00:21:17.380 "data_size": 65536 00:21:17.380 } 00:21:17.380 ] 00:21:17.380 }' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.380 09:13:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.754 09:13:54 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.754 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.754 "name": "raid_bdev1", 00:21:18.754 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:18.754 "strip_size_kb": 64, 00:21:18.754 "state": "online", 00:21:18.754 "raid_level": "raid5f", 00:21:18.754 "superblock": false, 00:21:18.754 "num_base_bdevs": 3, 00:21:18.754 "num_base_bdevs_discovered": 3, 00:21:18.754 "num_base_bdevs_operational": 3, 00:21:18.754 "process": { 00:21:18.754 "type": "rebuild", 00:21:18.754 "target": "spare", 00:21:18.754 "progress": { 00:21:18.754 "blocks": 47104, 00:21:18.754 "percent": 35 00:21:18.754 } 00:21:18.754 }, 00:21:18.754 "base_bdevs_list": [ 00:21:18.754 { 00:21:18.754 "name": "spare", 00:21:18.754 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:18.755 "is_configured": true, 00:21:18.755 "data_offset": 0, 00:21:18.755 "data_size": 65536 00:21:18.755 }, 00:21:18.755 { 00:21:18.755 "name": "BaseBdev2", 00:21:18.755 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:18.755 "is_configured": true, 00:21:18.755 "data_offset": 0, 00:21:18.755 "data_size": 65536 00:21:18.755 }, 00:21:18.755 { 00:21:18.755 "name": "BaseBdev3", 00:21:18.755 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:18.755 "is_configured": true, 00:21:18.755 "data_offset": 0, 00:21:18.755 "data_size": 65536 00:21:18.755 } 00:21:18.755 ] 00:21:18.755 }' 00:21:18.755 09:13:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.755 09:13:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.755 09:13:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:21:18.755 09:13:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.755 09:13:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.691 "name": "raid_bdev1", 00:21:19.691 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:19.691 "strip_size_kb": 64, 00:21:19.691 "state": "online", 00:21:19.691 "raid_level": "raid5f", 00:21:19.691 "superblock": false, 00:21:19.691 "num_base_bdevs": 3, 00:21:19.691 "num_base_bdevs_discovered": 3, 00:21:19.691 "num_base_bdevs_operational": 3, 00:21:19.691 "process": { 00:21:19.691 "type": "rebuild", 00:21:19.691 "target": "spare", 00:21:19.691 
"progress": { 00:21:19.691 "blocks": 69632, 00:21:19.691 "percent": 53 00:21:19.691 } 00:21:19.691 }, 00:21:19.691 "base_bdevs_list": [ 00:21:19.691 { 00:21:19.691 "name": "spare", 00:21:19.691 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:19.691 "is_configured": true, 00:21:19.691 "data_offset": 0, 00:21:19.691 "data_size": 65536 00:21:19.691 }, 00:21:19.691 { 00:21:19.691 "name": "BaseBdev2", 00:21:19.691 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:19.691 "is_configured": true, 00:21:19.691 "data_offset": 0, 00:21:19.691 "data_size": 65536 00:21:19.691 }, 00:21:19.691 { 00:21:19.691 "name": "BaseBdev3", 00:21:19.691 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:19.691 "is_configured": true, 00:21:19.691 "data_offset": 0, 00:21:19.691 "data_size": 65536 00:21:19.691 } 00:21:19.691 ] 00:21:19.691 }' 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.691 09:13:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.066 "name": "raid_bdev1", 00:21:21.066 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:21.066 "strip_size_kb": 64, 00:21:21.066 "state": "online", 00:21:21.066 "raid_level": "raid5f", 00:21:21.066 "superblock": false, 00:21:21.066 "num_base_bdevs": 3, 00:21:21.066 "num_base_bdevs_discovered": 3, 00:21:21.066 "num_base_bdevs_operational": 3, 00:21:21.066 "process": { 00:21:21.066 "type": "rebuild", 00:21:21.066 "target": "spare", 00:21:21.066 "progress": { 00:21:21.066 "blocks": 92160, 00:21:21.066 "percent": 70 00:21:21.066 } 00:21:21.066 }, 00:21:21.066 "base_bdevs_list": [ 00:21:21.066 { 00:21:21.066 "name": "spare", 00:21:21.066 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:21.066 "is_configured": true, 00:21:21.066 "data_offset": 0, 00:21:21.066 "data_size": 65536 00:21:21.066 }, 00:21:21.066 { 00:21:21.066 "name": "BaseBdev2", 00:21:21.066 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:21.066 "is_configured": true, 00:21:21.066 "data_offset": 0, 00:21:21.066 "data_size": 65536 00:21:21.066 }, 00:21:21.066 { 00:21:21.066 "name": "BaseBdev3", 00:21:21.066 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:21.066 "is_configured": true, 00:21:21.066 "data_offset": 0, 00:21:21.066 "data_size": 65536 00:21:21.066 } 00:21:21.066 ] 00:21:21.066 }' 
00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.066 09:13:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.000 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:22.000 "name": "raid_bdev1", 00:21:22.000 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:22.000 "strip_size_kb": 64, 00:21:22.000 
"state": "online", 00:21:22.000 "raid_level": "raid5f", 00:21:22.000 "superblock": false, 00:21:22.000 "num_base_bdevs": 3, 00:21:22.000 "num_base_bdevs_discovered": 3, 00:21:22.000 "num_base_bdevs_operational": 3, 00:21:22.000 "process": { 00:21:22.000 "type": "rebuild", 00:21:22.000 "target": "spare", 00:21:22.000 "progress": { 00:21:22.000 "blocks": 116736, 00:21:22.000 "percent": 89 00:21:22.000 } 00:21:22.000 }, 00:21:22.000 "base_bdevs_list": [ 00:21:22.000 { 00:21:22.000 "name": "spare", 00:21:22.000 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:22.000 "is_configured": true, 00:21:22.000 "data_offset": 0, 00:21:22.000 "data_size": 65536 00:21:22.001 }, 00:21:22.001 { 00:21:22.001 "name": "BaseBdev2", 00:21:22.001 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:22.001 "is_configured": true, 00:21:22.001 "data_offset": 0, 00:21:22.001 "data_size": 65536 00:21:22.001 }, 00:21:22.001 { 00:21:22.001 "name": "BaseBdev3", 00:21:22.001 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:22.001 "is_configured": true, 00:21:22.001 "data_offset": 0, 00:21:22.001 "data_size": 65536 00:21:22.001 } 00:21:22.001 ] 00:21:22.001 }' 00:21:22.001 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:22.001 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:22.001 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:22.001 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:22.001 09:13:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:22.567 [2024-11-06 09:13:59.055479] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:22.567 [2024-11-06 09:13:59.055767] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:22.567 [2024-11-06 
09:13:59.055857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.133 "name": "raid_bdev1", 00:21:23.133 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:23.133 "strip_size_kb": 64, 00:21:23.133 "state": "online", 00:21:23.133 "raid_level": "raid5f", 00:21:23.133 "superblock": false, 00:21:23.133 "num_base_bdevs": 3, 00:21:23.133 "num_base_bdevs_discovered": 3, 00:21:23.133 "num_base_bdevs_operational": 3, 00:21:23.133 "base_bdevs_list": [ 00:21:23.133 { 00:21:23.133 "name": "spare", 00:21:23.133 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:23.133 "is_configured": true, 00:21:23.133 "data_offset": 0, 00:21:23.133 "data_size": 65536 
00:21:23.133 }, 00:21:23.133 { 00:21:23.133 "name": "BaseBdev2", 00:21:23.133 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:23.133 "is_configured": true, 00:21:23.133 "data_offset": 0, 00:21:23.133 "data_size": 65536 00:21:23.133 }, 00:21:23.133 { 00:21:23.133 "name": "BaseBdev3", 00:21:23.133 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:23.133 "is_configured": true, 00:21:23.133 "data_offset": 0, 00:21:23.133 "data_size": 65536 00:21:23.133 } 00:21:23.133 ] 00:21:23.133 }' 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:23.133 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.391 "name": "raid_bdev1", 00:21:23.391 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:23.391 "strip_size_kb": 64, 00:21:23.391 "state": "online", 00:21:23.391 "raid_level": "raid5f", 00:21:23.391 "superblock": false, 00:21:23.391 "num_base_bdevs": 3, 00:21:23.391 "num_base_bdevs_discovered": 3, 00:21:23.391 "num_base_bdevs_operational": 3, 00:21:23.391 "base_bdevs_list": [ 00:21:23.391 { 00:21:23.391 "name": "spare", 00:21:23.391 "uuid": "a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:23.391 "is_configured": true, 00:21:23.391 "data_offset": 0, 00:21:23.391 "data_size": 65536 00:21:23.391 }, 00:21:23.391 { 00:21:23.391 "name": "BaseBdev2", 00:21:23.391 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:23.391 "is_configured": true, 00:21:23.391 "data_offset": 0, 00:21:23.391 "data_size": 65536 00:21:23.391 }, 00:21:23.391 { 00:21:23.391 "name": "BaseBdev3", 00:21:23.391 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:23.391 "is_configured": true, 00:21:23.391 "data_offset": 0, 00:21:23.391 "data_size": 65536 00:21:23.391 } 00:21:23.391 ] 00:21:23.391 }' 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.391 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.392 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.392 "name": "raid_bdev1", 00:21:23.392 "uuid": "8f5d53bf-04ee-4ef3-8ca3-1efad4a60b30", 00:21:23.392 "strip_size_kb": 64, 00:21:23.392 "state": "online", 00:21:23.392 "raid_level": "raid5f", 00:21:23.392 "superblock": false, 00:21:23.392 "num_base_bdevs": 3, 00:21:23.392 "num_base_bdevs_discovered": 3, 00:21:23.392 "num_base_bdevs_operational": 3, 00:21:23.392 "base_bdevs_list": [ 00:21:23.392 { 00:21:23.392 "name": "spare", 00:21:23.392 "uuid": 
"a534c3bb-e50b-510b-9f2b-0ae0e4e5073a", 00:21:23.392 "is_configured": true, 00:21:23.392 "data_offset": 0, 00:21:23.392 "data_size": 65536 00:21:23.392 }, 00:21:23.392 { 00:21:23.392 "name": "BaseBdev2", 00:21:23.392 "uuid": "f0eb7be6-a18e-56ed-8d1c-34808cd6aa65", 00:21:23.392 "is_configured": true, 00:21:23.392 "data_offset": 0, 00:21:23.392 "data_size": 65536 00:21:23.392 }, 00:21:23.392 { 00:21:23.392 "name": "BaseBdev3", 00:21:23.392 "uuid": "19e6cf11-ceaa-5319-acde-403827b34ce8", 00:21:23.392 "is_configured": true, 00:21:23.392 "data_offset": 0, 00:21:23.392 "data_size": 65536 00:21:23.392 } 00:21:23.392 ] 00:21:23.392 }' 00:21:23.392 09:13:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.392 09:13:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.959 [2024-11-06 09:14:00.402679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:23.959 [2024-11-06 09:14:00.402724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:23.959 [2024-11-06 09:14:00.402860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:23.959 [2024-11-06 09:14:00.402966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:23.959 [2024-11-06 09:14:00.402991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:23.959 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:24.217 /dev/nbd0 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:24.476 1+0 records in 00:21:24.476 1+0 records out 00:21:24.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680366 s, 6.0 MB/s 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:24.476 09:14:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:24.734 /dev/nbd1 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:24.734 1+0 records in 00:21:24.734 1+0 records out 00:21:24.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442286 s, 9.3 MB/s 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:24.734 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:25.300 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81851 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 81851 ']' 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 81851 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o 
comm= 81851 00:21:25.558 killing process with pid 81851 00:21:25.558 Received shutdown signal, test time was about 60.000000 seconds 00:21:25.558 00:21:25.558 Latency(us) 00:21:25.558 [2024-11-06T09:14:02.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:25.558 [2024-11-06T09:14:02.093Z] =================================================================================================================== 00:21:25.558 [2024-11-06T09:14:02.093Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81851' 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 81851 00:21:25.558 [2024-11-06 09:14:01.971533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:25.558 09:14:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 81851 00:21:25.816 [2024-11-06 09:14:02.326192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:27.191 ************************************ 00:21:27.191 END TEST raid5f_rebuild_test 00:21:27.191 ************************************ 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:21:27.191 00:21:27.191 real 0m16.412s 00:21:27.191 user 0m20.923s 00:21:27.191 sys 0m2.087s 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.191 09:14:03 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:21:27.191 09:14:03 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:21:27.191 09:14:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:27.191 09:14:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:27.191 ************************************ 00:21:27.191 START TEST raid5f_rebuild_test_sb 00:21:27.191 ************************************ 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:27.191 09:14:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82306 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82306 
00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82306 ']' 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:27.191 09:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.191 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:27.191 Zero copy mechanism will not be used. 00:21:27.191 [2024-11-06 09:14:03.533041] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:21:27.191 [2024-11-06 09:14:03.533225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82306 ] 00:21:27.191 [2024-11-06 09:14:03.723208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.450 [2024-11-06 09:14:03.877944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.709 [2024-11-06 09:14:04.118913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:27.709 [2024-11-06 09:14:04.119128] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:28.275 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.275 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.276 BaseBdev1_malloc 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.276 [2024-11-06 09:14:04.663395] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.276 [2024-11-06 09:14:04.663479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.276 [2024-11-06 09:14:04.663512] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:28.276 [2024-11-06 09:14:04.663531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.276 [2024-11-06 09:14:04.666323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.276 [2024-11-06 09:14:04.666375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.276 BaseBdev1 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.276 BaseBdev2_malloc 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.276 [2024-11-06 09:14:04.711794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:28.276 [2024-11-06 09:14:04.711877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:21:28.276 [2024-11-06 09:14:04.711907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:28.276 [2024-11-06 09:14:04.711928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.276 [2024-11-06 09:14:04.714610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.276 [2024-11-06 09:14:04.714660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:28.276 BaseBdev2 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.276 BaseBdev3_malloc 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.276 [2024-11-06 09:14:04.773953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:28.276 [2024-11-06 09:14:04.774153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.276 [2024-11-06 09:14:04.774200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:28.276 [2024-11-06 
09:14:04.774227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.276 [2024-11-06 09:14:04.776990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.276 [2024-11-06 09:14:04.777041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:28.276 BaseBdev3 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.276 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.534 spare_malloc 00:21:28.534 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.535 spare_delay 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.535 [2024-11-06 09:14:04.834386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:28.535 [2024-11-06 09:14:04.834452] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.535 [2024-11-06 09:14:04.834481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:28.535 [2024-11-06 09:14:04.834500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.535 [2024-11-06 09:14:04.837392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.535 [2024-11-06 09:14:04.837444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:28.535 spare 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.535 [2024-11-06 09:14:04.842487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:28.535 [2024-11-06 09:14:04.844941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:28.535 [2024-11-06 09:14:04.845058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:28.535 [2024-11-06 09:14:04.845397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:28.535 [2024-11-06 09:14:04.845420] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:28.535 [2024-11-06 09:14:04.845801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:28.535 [2024-11-06 09:14:04.851917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:28.535 [2024-11-06 09:14:04.851957] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:28.535 [2024-11-06 09:14:04.852254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:28.535 "name": "raid_bdev1", 00:21:28.535 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:28.535 "strip_size_kb": 64, 00:21:28.535 "state": "online", 00:21:28.535 "raid_level": "raid5f", 00:21:28.535 "superblock": true, 00:21:28.535 "num_base_bdevs": 3, 00:21:28.535 "num_base_bdevs_discovered": 3, 00:21:28.535 "num_base_bdevs_operational": 3, 00:21:28.535 "base_bdevs_list": [ 00:21:28.535 { 00:21:28.535 "name": "BaseBdev1", 00:21:28.535 "uuid": "a37be9a0-ced5-5db7-9dc7-69b74e5e86fd", 00:21:28.535 "is_configured": true, 00:21:28.535 "data_offset": 2048, 00:21:28.535 "data_size": 63488 00:21:28.535 }, 00:21:28.535 { 00:21:28.535 "name": "BaseBdev2", 00:21:28.535 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:28.535 "is_configured": true, 00:21:28.535 "data_offset": 2048, 00:21:28.535 "data_size": 63488 00:21:28.535 }, 00:21:28.535 { 00:21:28.535 "name": "BaseBdev3", 00:21:28.535 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:28.535 "is_configured": true, 00:21:28.535 "data_offset": 2048, 00:21:28.535 "data_size": 63488 00:21:28.535 } 00:21:28.535 ] 00:21:28.535 }' 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:28.535 09:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.102 [2024-11-06 09:14:05.374914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.102 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:29.360 [2024-11-06 09:14:05.754834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:29.360 /dev/nbd0 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:29.360 1+0 records in 00:21:29.360 1+0 records out 00:21:29.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063569 s, 6.4 MB/s 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:21:29.360 09:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:21:29.925 496+0 records in 00:21:29.925 496+0 records out 00:21:29.925 65011712 bytes (65 MB, 62 MiB) copied, 0.466867 s, 139 MB/s 00:21:29.925 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:29.925 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:29.925 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:29.925 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.925 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:29.925 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:21:29.925 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:30.204 [2024-11-06 09:14:06.570278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.204 [2024-11-06 09:14:06.600049] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.204 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.205 "name": "raid_bdev1", 00:21:30.205 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:30.205 "strip_size_kb": 64, 00:21:30.205 "state": "online", 00:21:30.205 "raid_level": "raid5f", 00:21:30.205 "superblock": true, 00:21:30.205 "num_base_bdevs": 3, 00:21:30.205 "num_base_bdevs_discovered": 2, 00:21:30.205 "num_base_bdevs_operational": 2, 00:21:30.205 "base_bdevs_list": [ 00:21:30.205 { 00:21:30.205 "name": null, 00:21:30.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.205 "is_configured": 
false, 00:21:30.205 "data_offset": 0, 00:21:30.205 "data_size": 63488 00:21:30.205 }, 00:21:30.205 { 00:21:30.205 "name": "BaseBdev2", 00:21:30.205 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:30.205 "is_configured": true, 00:21:30.205 "data_offset": 2048, 00:21:30.205 "data_size": 63488 00:21:30.205 }, 00:21:30.205 { 00:21:30.205 "name": "BaseBdev3", 00:21:30.205 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:30.205 "is_configured": true, 00:21:30.205 "data_offset": 2048, 00:21:30.205 "data_size": 63488 00:21:30.205 } 00:21:30.205 ] 00:21:30.205 }' 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.205 09:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.785 09:14:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:30.785 09:14:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.785 09:14:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.785 [2024-11-06 09:14:07.112193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.785 [2024-11-06 09:14:07.127738] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:21:30.785 09:14:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.785 09:14:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:30.785 [2024-11-06 09:14:07.135387] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.743 09:14:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.743 "name": "raid_bdev1", 00:21:31.743 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:31.743 "strip_size_kb": 64, 00:21:31.743 "state": "online", 00:21:31.743 "raid_level": "raid5f", 00:21:31.743 "superblock": true, 00:21:31.743 "num_base_bdevs": 3, 00:21:31.743 "num_base_bdevs_discovered": 3, 00:21:31.743 "num_base_bdevs_operational": 3, 00:21:31.743 "process": { 00:21:31.743 "type": "rebuild", 00:21:31.743 "target": "spare", 00:21:31.743 "progress": { 00:21:31.743 "blocks": 18432, 00:21:31.743 "percent": 14 00:21:31.743 } 00:21:31.743 }, 00:21:31.743 "base_bdevs_list": [ 00:21:31.743 { 00:21:31.743 "name": "spare", 00:21:31.743 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:31.743 "is_configured": true, 00:21:31.743 "data_offset": 2048, 00:21:31.743 "data_size": 63488 00:21:31.743 }, 00:21:31.743 { 00:21:31.743 "name": "BaseBdev2", 00:21:31.743 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:31.743 "is_configured": true, 00:21:31.743 "data_offset": 2048, 00:21:31.743 "data_size": 63488 
00:21:31.743 }, 00:21:31.743 { 00:21:31.743 "name": "BaseBdev3", 00:21:31.743 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:31.743 "is_configured": true, 00:21:31.743 "data_offset": 2048, 00:21:31.743 "data_size": 63488 00:21:31.743 } 00:21:31.743 ] 00:21:31.743 }' 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.743 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.001 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.001 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:32.001 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.001 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.001 [2024-11-06 09:14:08.305044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.001 [2024-11-06 09:14:08.349378] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:32.001 [2024-11-06 09:14:08.349616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.001 [2024-11-06 09:14:08.349756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:32.002 [2024-11-06 09:14:08.349809] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:32.002 "name": "raid_bdev1", 00:21:32.002 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:32.002 "strip_size_kb": 64, 00:21:32.002 "state": "online", 00:21:32.002 "raid_level": "raid5f", 00:21:32.002 "superblock": true, 00:21:32.002 "num_base_bdevs": 3, 00:21:32.002 "num_base_bdevs_discovered": 2, 00:21:32.002 "num_base_bdevs_operational": 2, 00:21:32.002 "base_bdevs_list": [ 00:21:32.002 
{ 00:21:32.002 "name": null, 00:21:32.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.002 "is_configured": false, 00:21:32.002 "data_offset": 0, 00:21:32.002 "data_size": 63488 00:21:32.002 }, 00:21:32.002 { 00:21:32.002 "name": "BaseBdev2", 00:21:32.002 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:32.002 "is_configured": true, 00:21:32.002 "data_offset": 2048, 00:21:32.002 "data_size": 63488 00:21:32.002 }, 00:21:32.002 { 00:21:32.002 "name": "BaseBdev3", 00:21:32.002 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:32.002 "is_configured": true, 00:21:32.002 "data_offset": 2048, 00:21:32.002 "data_size": 63488 00:21:32.002 } 00:21:32.002 ] 00:21:32.002 }' 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:32.002 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:21:32.568 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:32.568 "name": "raid_bdev1", 00:21:32.568 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:32.568 "strip_size_kb": 64, 00:21:32.568 "state": "online", 00:21:32.568 "raid_level": "raid5f", 00:21:32.568 "superblock": true, 00:21:32.568 "num_base_bdevs": 3, 00:21:32.568 "num_base_bdevs_discovered": 2, 00:21:32.568 "num_base_bdevs_operational": 2, 00:21:32.568 "base_bdevs_list": [ 00:21:32.568 { 00:21:32.568 "name": null, 00:21:32.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.568 "is_configured": false, 00:21:32.568 "data_offset": 0, 00:21:32.568 "data_size": 63488 00:21:32.568 }, 00:21:32.568 { 00:21:32.568 "name": "BaseBdev2", 00:21:32.568 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:32.568 "is_configured": true, 00:21:32.568 "data_offset": 2048, 00:21:32.568 "data_size": 63488 00:21:32.569 }, 00:21:32.569 { 00:21:32.569 "name": "BaseBdev3", 00:21:32.569 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:32.569 "is_configured": true, 00:21:32.569 "data_offset": 2048, 00:21:32.569 "data_size": 63488 00:21:32.569 } 00:21:32.569 ] 00:21:32.569 }' 00:21:32.569 09:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:21:32.569 [2024-11-06 09:14:09.064878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:32.569 [2024-11-06 09:14:09.079348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:32.569 09:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:32.569 [2024-11-06 09:14:09.086539] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.946 "name": "raid_bdev1", 00:21:33.946 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:33.946 "strip_size_kb": 64, 00:21:33.946 "state": "online", 
00:21:33.946 "raid_level": "raid5f", 00:21:33.946 "superblock": true, 00:21:33.946 "num_base_bdevs": 3, 00:21:33.946 "num_base_bdevs_discovered": 3, 00:21:33.946 "num_base_bdevs_operational": 3, 00:21:33.946 "process": { 00:21:33.946 "type": "rebuild", 00:21:33.946 "target": "spare", 00:21:33.946 "progress": { 00:21:33.946 "blocks": 18432, 00:21:33.946 "percent": 14 00:21:33.946 } 00:21:33.946 }, 00:21:33.946 "base_bdevs_list": [ 00:21:33.946 { 00:21:33.946 "name": "spare", 00:21:33.946 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:33.946 "is_configured": true, 00:21:33.946 "data_offset": 2048, 00:21:33.946 "data_size": 63488 00:21:33.946 }, 00:21:33.946 { 00:21:33.946 "name": "BaseBdev2", 00:21:33.946 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:33.946 "is_configured": true, 00:21:33.946 "data_offset": 2048, 00:21:33.946 "data_size": 63488 00:21:33.946 }, 00:21:33.946 { 00:21:33.946 "name": "BaseBdev3", 00:21:33.946 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:33.946 "is_configured": true, 00:21:33.946 "data_offset": 2048, 00:21:33.946 "data_size": 63488 00:21:33.946 } 00:21:33.946 ] 00:21:33.946 }' 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.946 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:33.947 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 
-- # local num_base_bdevs_operational=3 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=610 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:33.947 "name": "raid_bdev1", 00:21:33.947 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:33.947 "strip_size_kb": 64, 00:21:33.947 "state": "online", 00:21:33.947 "raid_level": "raid5f", 00:21:33.947 "superblock": true, 00:21:33.947 "num_base_bdevs": 3, 00:21:33.947 "num_base_bdevs_discovered": 3, 00:21:33.947 "num_base_bdevs_operational": 3, 00:21:33.947 "process": { 00:21:33.947 "type": 
"rebuild", 00:21:33.947 "target": "spare", 00:21:33.947 "progress": { 00:21:33.947 "blocks": 24576, 00:21:33.947 "percent": 19 00:21:33.947 } 00:21:33.947 }, 00:21:33.947 "base_bdevs_list": [ 00:21:33.947 { 00:21:33.947 "name": "spare", 00:21:33.947 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:33.947 "is_configured": true, 00:21:33.947 "data_offset": 2048, 00:21:33.947 "data_size": 63488 00:21:33.947 }, 00:21:33.947 { 00:21:33.947 "name": "BaseBdev2", 00:21:33.947 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:33.947 "is_configured": true, 00:21:33.947 "data_offset": 2048, 00:21:33.947 "data_size": 63488 00:21:33.947 }, 00:21:33.947 { 00:21:33.947 "name": "BaseBdev3", 00:21:33.947 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:33.947 "is_configured": true, 00:21:33.947 "data_offset": 2048, 00:21:33.947 "data_size": 63488 00:21:33.947 } 00:21:33.947 ] 00:21:33.947 }' 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.947 09:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:35.320 "name": "raid_bdev1", 00:21:35.320 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:35.320 "strip_size_kb": 64, 00:21:35.320 "state": "online", 00:21:35.320 "raid_level": "raid5f", 00:21:35.320 "superblock": true, 00:21:35.320 "num_base_bdevs": 3, 00:21:35.320 "num_base_bdevs_discovered": 3, 00:21:35.320 "num_base_bdevs_operational": 3, 00:21:35.320 "process": { 00:21:35.320 "type": "rebuild", 00:21:35.320 "target": "spare", 00:21:35.320 "progress": { 00:21:35.320 "blocks": 47104, 00:21:35.320 "percent": 37 00:21:35.320 } 00:21:35.320 }, 00:21:35.320 "base_bdevs_list": [ 00:21:35.320 { 00:21:35.320 "name": "spare", 00:21:35.320 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:35.320 "is_configured": true, 00:21:35.320 "data_offset": 2048, 00:21:35.320 "data_size": 63488 00:21:35.320 }, 00:21:35.320 { 00:21:35.320 "name": "BaseBdev2", 00:21:35.320 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:35.320 "is_configured": true, 00:21:35.320 "data_offset": 2048, 00:21:35.320 "data_size": 63488 00:21:35.320 }, 00:21:35.320 { 00:21:35.320 "name": "BaseBdev3", 00:21:35.320 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:35.320 
"is_configured": true, 00:21:35.320 "data_offset": 2048, 00:21:35.320 "data_size": 63488 00:21:35.320 } 00:21:35.320 ] 00:21:35.320 }' 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:35.320 09:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.256 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.257 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.257 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:36.257 "name": "raid_bdev1", 00:21:36.257 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:36.257 "strip_size_kb": 64, 00:21:36.257 "state": "online", 00:21:36.257 "raid_level": "raid5f", 00:21:36.257 "superblock": true, 00:21:36.257 "num_base_bdevs": 3, 00:21:36.257 "num_base_bdevs_discovered": 3, 00:21:36.257 "num_base_bdevs_operational": 3, 00:21:36.257 "process": { 00:21:36.257 "type": "rebuild", 00:21:36.257 "target": "spare", 00:21:36.257 "progress": { 00:21:36.257 "blocks": 71680, 00:21:36.257 "percent": 56 00:21:36.257 } 00:21:36.257 }, 00:21:36.257 "base_bdevs_list": [ 00:21:36.257 { 00:21:36.257 "name": "spare", 00:21:36.257 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:36.257 "is_configured": true, 00:21:36.257 "data_offset": 2048, 00:21:36.257 "data_size": 63488 00:21:36.257 }, 00:21:36.257 { 00:21:36.257 "name": "BaseBdev2", 00:21:36.257 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:36.257 "is_configured": true, 00:21:36.257 "data_offset": 2048, 00:21:36.257 "data_size": 63488 00:21:36.257 }, 00:21:36.257 { 00:21:36.257 "name": "BaseBdev3", 00:21:36.257 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:36.257 "is_configured": true, 00:21:36.257 "data_offset": 2048, 00:21:36.257 "data_size": 63488 00:21:36.257 } 00:21:36.257 ] 00:21:36.257 }' 00:21:36.257 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:36.257 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:36.257 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:36.515 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.515 09:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.450 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:37.450 "name": "raid_bdev1", 00:21:37.451 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:37.451 "strip_size_kb": 64, 00:21:37.451 "state": "online", 00:21:37.451 "raid_level": "raid5f", 00:21:37.451 "superblock": true, 00:21:37.451 "num_base_bdevs": 3, 00:21:37.451 "num_base_bdevs_discovered": 3, 00:21:37.451 "num_base_bdevs_operational": 3, 00:21:37.451 "process": { 00:21:37.451 "type": "rebuild", 00:21:37.451 "target": "spare", 00:21:37.451 "progress": { 00:21:37.451 "blocks": 94208, 00:21:37.451 "percent": 74 00:21:37.451 } 00:21:37.451 }, 00:21:37.451 "base_bdevs_list": [ 00:21:37.451 { 00:21:37.451 "name": "spare", 00:21:37.451 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:37.451 "is_configured": true, 
00:21:37.451 "data_offset": 2048, 00:21:37.451 "data_size": 63488 00:21:37.451 }, 00:21:37.451 { 00:21:37.451 "name": "BaseBdev2", 00:21:37.451 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:37.451 "is_configured": true, 00:21:37.451 "data_offset": 2048, 00:21:37.451 "data_size": 63488 00:21:37.451 }, 00:21:37.451 { 00:21:37.451 "name": "BaseBdev3", 00:21:37.451 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:37.451 "is_configured": true, 00:21:37.451 "data_offset": 2048, 00:21:37.451 "data_size": 63488 00:21:37.451 } 00:21:37.451 ] 00:21:37.451 }' 00:21:37.451 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:37.451 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:37.451 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:37.451 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:37.451 09:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.826 09:14:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:38.826 09:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:38.826 "name": "raid_bdev1", 00:21:38.826 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:38.826 "strip_size_kb": 64, 00:21:38.826 "state": "online", 00:21:38.826 "raid_level": "raid5f", 00:21:38.826 "superblock": true, 00:21:38.826 "num_base_bdevs": 3, 00:21:38.826 "num_base_bdevs_discovered": 3, 00:21:38.826 "num_base_bdevs_operational": 3, 00:21:38.826 "process": { 00:21:38.826 "type": "rebuild", 00:21:38.826 "target": "spare", 00:21:38.826 "progress": { 00:21:38.826 "blocks": 118784, 00:21:38.826 "percent": 93 00:21:38.826 } 00:21:38.826 }, 00:21:38.826 "base_bdevs_list": [ 00:21:38.826 { 00:21:38.826 "name": "spare", 00:21:38.826 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:38.826 "is_configured": true, 00:21:38.826 "data_offset": 2048, 00:21:38.826 "data_size": 63488 00:21:38.826 }, 00:21:38.826 { 00:21:38.826 "name": "BaseBdev2", 00:21:38.826 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:38.826 "is_configured": true, 00:21:38.826 "data_offset": 2048, 00:21:38.826 "data_size": 63488 00:21:38.826 }, 00:21:38.826 { 00:21:38.826 "name": "BaseBdev3", 00:21:38.826 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:38.826 "is_configured": true, 00:21:38.826 "data_offset": 2048, 00:21:38.826 "data_size": 63488 00:21:38.826 } 00:21:38.826 ] 00:21:38.826 }' 00:21:38.826 09:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:38.826 09:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:38.826 09:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:38.826 09:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:38.826 09:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:39.085 [2024-11-06 09:14:15.361957] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:39.085 [2024-11-06 09:14:15.362057] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:39.085 [2024-11-06 09:14:15.362207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.652 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.910 09:14:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.910 "name": "raid_bdev1", 00:21:39.910 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:39.910 "strip_size_kb": 64, 00:21:39.911 "state": "online", 00:21:39.911 "raid_level": "raid5f", 00:21:39.911 "superblock": true, 00:21:39.911 "num_base_bdevs": 3, 00:21:39.911 "num_base_bdevs_discovered": 3, 00:21:39.911 "num_base_bdevs_operational": 3, 00:21:39.911 "base_bdevs_list": [ 00:21:39.911 { 00:21:39.911 "name": "spare", 00:21:39.911 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:39.911 "is_configured": true, 00:21:39.911 "data_offset": 2048, 00:21:39.911 "data_size": 63488 00:21:39.911 }, 00:21:39.911 { 00:21:39.911 "name": "BaseBdev2", 00:21:39.911 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:39.911 "is_configured": true, 00:21:39.911 "data_offset": 2048, 00:21:39.911 "data_size": 63488 00:21:39.911 }, 00:21:39.911 { 00:21:39.911 "name": "BaseBdev3", 00:21:39.911 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:39.911 "is_configured": true, 00:21:39.911 "data_offset": 2048, 00:21:39.911 "data_size": 63488 00:21:39.911 } 00:21:39.911 ] 00:21:39.911 }' 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:39.911 
09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:39.911 "name": "raid_bdev1", 00:21:39.911 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:39.911 "strip_size_kb": 64, 00:21:39.911 "state": "online", 00:21:39.911 "raid_level": "raid5f", 00:21:39.911 "superblock": true, 00:21:39.911 "num_base_bdevs": 3, 00:21:39.911 "num_base_bdevs_discovered": 3, 00:21:39.911 "num_base_bdevs_operational": 3, 00:21:39.911 "base_bdevs_list": [ 00:21:39.911 { 00:21:39.911 "name": "spare", 00:21:39.911 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:39.911 "is_configured": true, 00:21:39.911 "data_offset": 2048, 00:21:39.911 "data_size": 63488 00:21:39.911 }, 00:21:39.911 { 00:21:39.911 "name": "BaseBdev2", 00:21:39.911 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:39.911 "is_configured": true, 00:21:39.911 "data_offset": 2048, 00:21:39.911 "data_size": 63488 00:21:39.911 }, 00:21:39.911 { 00:21:39.911 "name": "BaseBdev3", 00:21:39.911 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:39.911 "is_configured": true, 00:21:39.911 "data_offset": 2048, 
00:21:39.911 "data_size": 63488 00:21:39.911 } 00:21:39.911 ] 00:21:39.911 }' 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:39.911 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.169 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:40.170 "name": "raid_bdev1", 00:21:40.170 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:40.170 "strip_size_kb": 64, 00:21:40.170 "state": "online", 00:21:40.170 "raid_level": "raid5f", 00:21:40.170 "superblock": true, 00:21:40.170 "num_base_bdevs": 3, 00:21:40.170 "num_base_bdevs_discovered": 3, 00:21:40.170 "num_base_bdevs_operational": 3, 00:21:40.170 "base_bdevs_list": [ 00:21:40.170 { 00:21:40.170 "name": "spare", 00:21:40.170 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:40.170 "is_configured": true, 00:21:40.170 "data_offset": 2048, 00:21:40.170 "data_size": 63488 00:21:40.170 }, 00:21:40.170 { 00:21:40.170 "name": "BaseBdev2", 00:21:40.170 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:40.170 "is_configured": true, 00:21:40.170 "data_offset": 2048, 00:21:40.170 "data_size": 63488 00:21:40.170 }, 00:21:40.170 { 00:21:40.170 "name": "BaseBdev3", 00:21:40.170 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:40.170 "is_configured": true, 00:21:40.170 "data_offset": 2048, 00:21:40.170 "data_size": 63488 00:21:40.170 } 00:21:40.170 ] 00:21:40.170 }' 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:40.170 09:14:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.737 [2024-11-06 09:14:17.017602] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.737 [2024-11-06 09:14:17.017637] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.737 [2024-11-06 09:14:17.017739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.737 [2024-11-06 09:14:17.017862] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.737 [2024-11-06 09:14:17.017889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:40.737 09:14:17 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.737 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:40.997 /dev/nbd0 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:40.997 1+0 records in 00:21:40.997 1+0 records out 00:21:40.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258702 s, 15.8 MB/s 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:40.997 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:41.564 /dev/nbd1 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:21:41.564 
09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.564 1+0 records in 00:21:41.564 1+0 records out 00:21:41.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318144 s, 12.9 MB/s 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:41.564 09:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:41.564 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:41.564 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:41.564 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:41.564 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:21:41.564 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:41.564 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.564 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:41.823 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:42.081 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:42.081 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:42.081 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:42.081 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.081 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.082 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:42.082 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:42.082 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.082 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:42.082 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:42.082 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.082 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.341 [2024-11-06 09:14:18.623739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:42.341 [2024-11-06 09:14:18.623809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.341 [2024-11-06 09:14:18.623852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:42.341 [2024-11-06 09:14:18.623871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.341 [2024-11-06 09:14:18.626743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.341 [2024-11-06 09:14:18.626795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:42.341 [2024-11-06 09:14:18.626932] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:42.341 [2024-11-06 09:14:18.627020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:42.341 [2024-11-06 09:14:18.627192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:42.341 [2024-11-06 09:14:18.627350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:42.341 spare 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.341 [2024-11-06 09:14:18.727478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:42.341 [2024-11-06 09:14:18.727597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:42.341 [2024-11-06 09:14:18.728056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:21:42.341 [2024-11-06 09:14:18.733093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:42.341 [2024-11-06 09:14:18.733123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:42.341 [2024-11-06 09:14:18.733382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.341 "name": "raid_bdev1", 00:21:42.341 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:42.341 "strip_size_kb": 64, 00:21:42.341 "state": "online", 00:21:42.341 "raid_level": "raid5f", 00:21:42.341 "superblock": true, 00:21:42.341 "num_base_bdevs": 3, 00:21:42.341 "num_base_bdevs_discovered": 3, 00:21:42.341 "num_base_bdevs_operational": 3, 00:21:42.341 "base_bdevs_list": [ 00:21:42.341 { 
00:21:42.341 "name": "spare", 00:21:42.341 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:42.341 "is_configured": true, 00:21:42.341 "data_offset": 2048, 00:21:42.341 "data_size": 63488 00:21:42.341 }, 00:21:42.341 { 00:21:42.341 "name": "BaseBdev2", 00:21:42.341 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:42.341 "is_configured": true, 00:21:42.341 "data_offset": 2048, 00:21:42.341 "data_size": 63488 00:21:42.341 }, 00:21:42.341 { 00:21:42.341 "name": "BaseBdev3", 00:21:42.341 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:42.341 "is_configured": true, 00:21:42.341 "data_offset": 2048, 00:21:42.341 "data_size": 63488 00:21:42.341 } 00:21:42.341 ] 00:21:42.341 }' 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.341 09:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:42.908 "name": "raid_bdev1", 00:21:42.908 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:42.908 "strip_size_kb": 64, 00:21:42.908 "state": "online", 00:21:42.908 "raid_level": "raid5f", 00:21:42.908 "superblock": true, 00:21:42.908 "num_base_bdevs": 3, 00:21:42.908 "num_base_bdevs_discovered": 3, 00:21:42.908 "num_base_bdevs_operational": 3, 00:21:42.908 "base_bdevs_list": [ 00:21:42.908 { 00:21:42.908 "name": "spare", 00:21:42.908 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:42.908 "is_configured": true, 00:21:42.908 "data_offset": 2048, 00:21:42.908 "data_size": 63488 00:21:42.908 }, 00:21:42.908 { 00:21:42.908 "name": "BaseBdev2", 00:21:42.908 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:42.908 "is_configured": true, 00:21:42.908 "data_offset": 2048, 00:21:42.908 "data_size": 63488 00:21:42.908 }, 00:21:42.908 { 00:21:42.908 "name": "BaseBdev3", 00:21:42.908 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:42.908 "is_configured": true, 00:21:42.908 "data_offset": 2048, 00:21:42.908 "data_size": 63488 00:21:42.908 } 00:21:42.908 ] 00:21:42.908 }' 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.908 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.166 [2024-11-06 09:14:19.487123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.166 "name": "raid_bdev1", 00:21:43.166 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:43.166 "strip_size_kb": 64, 00:21:43.166 "state": "online", 00:21:43.166 "raid_level": "raid5f", 00:21:43.166 "superblock": true, 00:21:43.166 "num_base_bdevs": 3, 00:21:43.166 "num_base_bdevs_discovered": 2, 00:21:43.166 "num_base_bdevs_operational": 2, 00:21:43.166 "base_bdevs_list": [ 00:21:43.166 { 00:21:43.166 "name": null, 00:21:43.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.166 "is_configured": false, 00:21:43.166 "data_offset": 0, 00:21:43.166 "data_size": 63488 00:21:43.166 }, 00:21:43.166 { 00:21:43.166 "name": "BaseBdev2", 00:21:43.166 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:43.166 "is_configured": true, 00:21:43.166 "data_offset": 2048, 00:21:43.166 "data_size": 63488 00:21:43.166 }, 00:21:43.166 { 00:21:43.166 "name": "BaseBdev3", 00:21:43.166 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:43.166 "is_configured": true, 00:21:43.166 "data_offset": 2048, 00:21:43.166 "data_size": 63488 00:21:43.166 } 00:21:43.166 ] 00:21:43.166 }' 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.166 09:14:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:43.732 09:14:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.732 09:14:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.732 09:14:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.732 [2024-11-06 09:14:20.015320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.732 [2024-11-06 09:14:20.015565] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:43.732 [2024-11-06 09:14:20.015592] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:43.732 [2024-11-06 09:14:20.015670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.732 [2024-11-06 09:14:20.030143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:21:43.732 09:14:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.732 09:14:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:43.732 [2024-11-06 09:14:20.037577] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:44.667 "name": "raid_bdev1", 00:21:44.667 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:44.667 "strip_size_kb": 64, 00:21:44.667 "state": "online", 00:21:44.667 "raid_level": "raid5f", 00:21:44.667 "superblock": true, 00:21:44.667 "num_base_bdevs": 3, 00:21:44.667 "num_base_bdevs_discovered": 3, 00:21:44.667 "num_base_bdevs_operational": 3, 00:21:44.667 "process": { 00:21:44.667 "type": "rebuild", 00:21:44.667 "target": "spare", 00:21:44.667 "progress": { 00:21:44.667 "blocks": 18432, 00:21:44.667 "percent": 14 00:21:44.667 } 00:21:44.667 }, 00:21:44.667 "base_bdevs_list": [ 00:21:44.667 { 00:21:44.667 "name": "spare", 00:21:44.667 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:44.667 "is_configured": true, 00:21:44.667 "data_offset": 2048, 00:21:44.667 "data_size": 63488 00:21:44.667 }, 00:21:44.667 { 00:21:44.667 "name": "BaseBdev2", 00:21:44.667 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:44.667 "is_configured": true, 00:21:44.667 "data_offset": 2048, 00:21:44.667 "data_size": 63488 00:21:44.667 }, 00:21:44.667 { 00:21:44.667 "name": "BaseBdev3", 00:21:44.667 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:44.667 "is_configured": true, 00:21:44.667 "data_offset": 2048, 00:21:44.667 "data_size": 63488 00:21:44.667 } 00:21:44.667 ] 00:21:44.667 }' 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.667 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.926 [2024-11-06 09:14:21.207397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.926 [2024-11-06 09:14:21.251562] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:44.926 [2024-11-06 09:14:21.251668] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:44.926 [2024-11-06 09:14:21.251692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:44.926 [2024-11-06 09:14:21.251705] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:44.926 "name": "raid_bdev1", 00:21:44.926 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:44.926 "strip_size_kb": 64, 00:21:44.926 "state": "online", 00:21:44.926 "raid_level": "raid5f", 00:21:44.926 "superblock": true, 00:21:44.926 "num_base_bdevs": 3, 00:21:44.926 "num_base_bdevs_discovered": 2, 00:21:44.926 "num_base_bdevs_operational": 2, 00:21:44.926 "base_bdevs_list": [ 00:21:44.926 { 00:21:44.926 "name": null, 00:21:44.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.926 "is_configured": false, 00:21:44.926 "data_offset": 0, 00:21:44.926 "data_size": 63488 00:21:44.926 }, 00:21:44.926 { 00:21:44.926 "name": "BaseBdev2", 00:21:44.926 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:44.926 "is_configured": true, 00:21:44.926 
"data_offset": 2048, 00:21:44.926 "data_size": 63488 00:21:44.926 }, 00:21:44.926 { 00:21:44.926 "name": "BaseBdev3", 00:21:44.926 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:44.926 "is_configured": true, 00:21:44.926 "data_offset": 2048, 00:21:44.926 "data_size": 63488 00:21:44.926 } 00:21:44.926 ] 00:21:44.926 }' 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:44.926 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.525 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:45.525 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.525 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:45.525 [2024-11-06 09:14:21.786755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:45.525 [2024-11-06 09:14:21.786850] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.525 [2024-11-06 09:14:21.786883] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:45.525 [2024-11-06 09:14:21.786904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.525 [2024-11-06 09:14:21.787500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.525 [2024-11-06 09:14:21.787548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:45.525 [2024-11-06 09:14:21.787666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:45.525 [2024-11-06 09:14:21.787694] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:45.525 [2024-11-06 09:14:21.787708] bdev_raid.c:3758:raid_bdev_examine_sb: 
*NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:45.525 [2024-11-06 09:14:21.787740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:45.525 [2024-11-06 09:14:21.802117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:21:45.525 spare 00:21:45.525 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.525 09:14:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:45.525 [2024-11-06 09:14:21.809189] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.460 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:46.461 "name": "raid_bdev1", 00:21:46.461 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 
00:21:46.461 "strip_size_kb": 64, 00:21:46.461 "state": "online", 00:21:46.461 "raid_level": "raid5f", 00:21:46.461 "superblock": true, 00:21:46.461 "num_base_bdevs": 3, 00:21:46.461 "num_base_bdevs_discovered": 3, 00:21:46.461 "num_base_bdevs_operational": 3, 00:21:46.461 "process": { 00:21:46.461 "type": "rebuild", 00:21:46.461 "target": "spare", 00:21:46.461 "progress": { 00:21:46.461 "blocks": 18432, 00:21:46.461 "percent": 14 00:21:46.461 } 00:21:46.461 }, 00:21:46.461 "base_bdevs_list": [ 00:21:46.461 { 00:21:46.461 "name": "spare", 00:21:46.461 "uuid": "eba40edc-ab30-5348-aede-438e47bf31d4", 00:21:46.461 "is_configured": true, 00:21:46.461 "data_offset": 2048, 00:21:46.461 "data_size": 63488 00:21:46.461 }, 00:21:46.461 { 00:21:46.461 "name": "BaseBdev2", 00:21:46.461 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:46.461 "is_configured": true, 00:21:46.461 "data_offset": 2048, 00:21:46.461 "data_size": 63488 00:21:46.461 }, 00:21:46.461 { 00:21:46.461 "name": "BaseBdev3", 00:21:46.461 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:46.461 "is_configured": true, 00:21:46.461 "data_offset": 2048, 00:21:46.461 "data_size": 63488 00:21:46.461 } 00:21:46.461 ] 00:21:46.461 }' 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.461 09:14:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:21:46.461 [2024-11-06 09:14:22.971377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.720 [2024-11-06 09:14:23.023676] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:46.720 [2024-11-06 09:14:23.023753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.720 [2024-11-06 09:14:23.023781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:46.720 [2024-11-06 09:14:23.023793] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.720 
09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.720 "name": "raid_bdev1", 00:21:46.720 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:46.720 "strip_size_kb": 64, 00:21:46.720 "state": "online", 00:21:46.720 "raid_level": "raid5f", 00:21:46.720 "superblock": true, 00:21:46.720 "num_base_bdevs": 3, 00:21:46.720 "num_base_bdevs_discovered": 2, 00:21:46.720 "num_base_bdevs_operational": 2, 00:21:46.720 "base_bdevs_list": [ 00:21:46.720 { 00:21:46.720 "name": null, 00:21:46.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.720 "is_configured": false, 00:21:46.720 "data_offset": 0, 00:21:46.720 "data_size": 63488 00:21:46.720 }, 00:21:46.720 { 00:21:46.720 "name": "BaseBdev2", 00:21:46.720 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:46.720 "is_configured": true, 00:21:46.720 "data_offset": 2048, 00:21:46.720 "data_size": 63488 00:21:46.720 }, 00:21:46.720 { 00:21:46.720 "name": "BaseBdev3", 00:21:46.720 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:46.720 "is_configured": true, 00:21:46.720 "data_offset": 2048, 00:21:46.720 "data_size": 63488 00:21:46.720 } 00:21:46.720 ] 00:21:46.720 }' 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.720 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:47.287 09:14:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:47.287 "name": "raid_bdev1", 00:21:47.287 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:47.287 "strip_size_kb": 64, 00:21:47.287 "state": "online", 00:21:47.287 "raid_level": "raid5f", 00:21:47.287 "superblock": true, 00:21:47.287 "num_base_bdevs": 3, 00:21:47.287 "num_base_bdevs_discovered": 2, 00:21:47.287 "num_base_bdevs_operational": 2, 00:21:47.287 "base_bdevs_list": [ 00:21:47.287 { 00:21:47.287 "name": null, 00:21:47.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.287 "is_configured": false, 00:21:47.287 "data_offset": 0, 00:21:47.287 "data_size": 63488 00:21:47.287 }, 00:21:47.287 { 00:21:47.287 "name": "BaseBdev2", 00:21:47.287 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:47.287 "is_configured": true, 00:21:47.287 "data_offset": 2048, 00:21:47.287 "data_size": 63488 00:21:47.287 }, 00:21:47.287 { 00:21:47.287 "name": "BaseBdev3", 00:21:47.287 "uuid": 
"72929f50-ffa0-52af-a682-212b5091366c", 00:21:47.287 "is_configured": true, 00:21:47.287 "data_offset": 2048, 00:21:47.287 "data_size": 63488 00:21:47.287 } 00:21:47.287 ] 00:21:47.287 }' 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:47.287 [2024-11-06 09:14:23.730875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:47.287 [2024-11-06 09:14:23.730942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.287 [2024-11-06 09:14:23.730976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:21:47.287 [2024-11-06 09:14:23.730992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.287 [2024-11-06 09:14:23.731533] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.287 [2024-11-06 09:14:23.731573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:47.287 [2024-11-06 09:14:23.731687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:47.287 [2024-11-06 09:14:23.731708] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:47.287 [2024-11-06 09:14:23.731734] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:47.287 [2024-11-06 09:14:23.731750] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:47.287 BaseBdev1 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.287 09:14:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.224 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.482 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.482 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.482 "name": "raid_bdev1", 00:21:48.482 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:48.482 "strip_size_kb": 64, 00:21:48.482 "state": "online", 00:21:48.482 "raid_level": "raid5f", 00:21:48.482 "superblock": true, 00:21:48.482 "num_base_bdevs": 3, 00:21:48.482 "num_base_bdevs_discovered": 2, 00:21:48.482 "num_base_bdevs_operational": 2, 00:21:48.482 "base_bdevs_list": [ 00:21:48.482 { 00:21:48.482 "name": null, 00:21:48.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.482 "is_configured": false, 00:21:48.482 "data_offset": 0, 00:21:48.482 "data_size": 63488 00:21:48.482 }, 00:21:48.482 { 00:21:48.482 "name": "BaseBdev2", 00:21:48.482 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:48.482 "is_configured": true, 00:21:48.482 "data_offset": 2048, 00:21:48.482 "data_size": 63488 00:21:48.482 }, 00:21:48.482 { 00:21:48.482 "name": "BaseBdev3", 00:21:48.482 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:48.482 "is_configured": true, 00:21:48.482 "data_offset": 2048, 00:21:48.482 "data_size": 63488 00:21:48.482 } 00:21:48.482 ] 00:21:48.482 }' 00:21:48.482 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:21:48.482 09:14:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.740 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.741 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.741 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:48.741 "name": "raid_bdev1", 00:21:48.741 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:48.741 "strip_size_kb": 64, 00:21:48.741 "state": "online", 00:21:48.741 "raid_level": "raid5f", 00:21:48.741 "superblock": true, 00:21:48.741 "num_base_bdevs": 3, 00:21:48.741 "num_base_bdevs_discovered": 2, 00:21:48.741 "num_base_bdevs_operational": 2, 00:21:48.741 "base_bdevs_list": [ 00:21:48.741 { 00:21:48.741 "name": null, 00:21:48.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.741 "is_configured": false, 00:21:48.741 "data_offset": 0, 00:21:48.741 "data_size": 63488 00:21:48.741 }, 00:21:48.741 { 00:21:48.741 "name": 
"BaseBdev2", 00:21:48.741 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:48.741 "is_configured": true, 00:21:48.741 "data_offset": 2048, 00:21:48.741 "data_size": 63488 00:21:48.741 }, 00:21:48.741 { 00:21:48.741 "name": "BaseBdev3", 00:21:48.741 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:48.741 "is_configured": true, 00:21:48.741 "data_offset": 2048, 00:21:48.741 "data_size": 63488 00:21:48.741 } 00:21:48.741 ] 00:21:48.741 }' 00:21:48.741 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:48.999 [2024-11-06 09:14:25.375413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:48.999 [2024-11-06 09:14:25.375612] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:48.999 [2024-11-06 09:14:25.375637] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:48.999 request: 00:21:48.999 { 00:21:48.999 "base_bdev": "BaseBdev1", 00:21:48.999 "raid_bdev": "raid_bdev1", 00:21:48.999 "method": "bdev_raid_add_base_bdev", 00:21:48.999 "req_id": 1 00:21:48.999 } 00:21:48.999 Got JSON-RPC error response 00:21:48.999 response: 00:21:48.999 { 00:21:48.999 "code": -22, 00:21:48.999 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:48.999 } 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.999 09:14:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:49.936 "name": "raid_bdev1", 00:21:49.936 "uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:49.936 "strip_size_kb": 64, 00:21:49.936 "state": "online", 00:21:49.936 "raid_level": "raid5f", 00:21:49.936 "superblock": true, 00:21:49.936 "num_base_bdevs": 3, 00:21:49.936 "num_base_bdevs_discovered": 2, 00:21:49.936 "num_base_bdevs_operational": 2, 00:21:49.936 "base_bdevs_list": [ 00:21:49.936 { 00:21:49.936 "name": null, 00:21:49.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.936 "is_configured": false, 00:21:49.936 "data_offset": 0, 00:21:49.936 
"data_size": 63488 00:21:49.936 }, 00:21:49.936 { 00:21:49.936 "name": "BaseBdev2", 00:21:49.936 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:49.936 "is_configured": true, 00:21:49.936 "data_offset": 2048, 00:21:49.936 "data_size": 63488 00:21:49.936 }, 00:21:49.936 { 00:21:49.936 "name": "BaseBdev3", 00:21:49.936 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:49.936 "is_configured": true, 00:21:49.936 "data_offset": 2048, 00:21:49.936 "data_size": 63488 00:21:49.936 } 00:21:49.936 ] 00:21:49.936 }' 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:49.936 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.516 "name": "raid_bdev1", 00:21:50.516 
"uuid": "96e254e6-a808-4e1e-9e56-0c602338f635", 00:21:50.516 "strip_size_kb": 64, 00:21:50.516 "state": "online", 00:21:50.516 "raid_level": "raid5f", 00:21:50.516 "superblock": true, 00:21:50.516 "num_base_bdevs": 3, 00:21:50.516 "num_base_bdevs_discovered": 2, 00:21:50.516 "num_base_bdevs_operational": 2, 00:21:50.516 "base_bdevs_list": [ 00:21:50.516 { 00:21:50.516 "name": null, 00:21:50.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.516 "is_configured": false, 00:21:50.516 "data_offset": 0, 00:21:50.516 "data_size": 63488 00:21:50.516 }, 00:21:50.516 { 00:21:50.516 "name": "BaseBdev2", 00:21:50.516 "uuid": "ecf6222c-3cd6-5c62-b669-aaaa6ead696e", 00:21:50.516 "is_configured": true, 00:21:50.516 "data_offset": 2048, 00:21:50.516 "data_size": 63488 00:21:50.516 }, 00:21:50.516 { 00:21:50.516 "name": "BaseBdev3", 00:21:50.516 "uuid": "72929f50-ffa0-52af-a682-212b5091366c", 00:21:50.516 "is_configured": true, 00:21:50.516 "data_offset": 2048, 00:21:50.516 "data_size": 63488 00:21:50.516 } 00:21:50.516 ] 00:21:50.516 }' 00:21:50.516 09:14:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.516 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:50.516 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82306 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82306 ']' 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 82306 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82306 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:50.774 killing process with pid 82306 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82306' 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 82306 00:21:50.774 Received shutdown signal, test time was about 60.000000 seconds 00:21:50.774 00:21:50.774 Latency(us) 00:21:50.774 [2024-11-06T09:14:27.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.774 [2024-11-06T09:14:27.309Z] =================================================================================================================== 00:21:50.774 [2024-11-06T09:14:27.309Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:50.774 [2024-11-06 09:14:27.103764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:50.774 09:14:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 82306 00:21:50.774 [2024-11-06 09:14:27.103945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.774 [2024-11-06 09:14:27.104027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.774 [2024-11-06 09:14:27.104047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:51.032 [2024-11-06 09:14:27.465050] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:51.967 09:14:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # 
return 0 00:21:51.967 00:21:51.967 real 0m25.067s 00:21:51.967 user 0m33.519s 00:21:51.967 sys 0m2.654s 00:21:51.967 09:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:51.967 09:14:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:51.967 ************************************ 00:21:51.967 END TEST raid5f_rebuild_test_sb 00:21:51.967 ************************************ 00:21:52.226 09:14:28 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:21:52.226 09:14:28 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:21:52.226 09:14:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:21:52.226 09:14:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:52.226 09:14:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:52.226 ************************************ 00:21:52.226 START TEST raid5f_state_function_test 00:21:52.226 ************************************ 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:52.226 09:14:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:21:52.226 
09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83065 00:21:52.226 Process raid pid: 83065 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83065' 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83065 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83065 ']' 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:52.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:52.226 09:14:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.226 [2024-11-06 09:14:28.658066] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:21:52.226 [2024-11-06 09:14:28.658238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.484 [2024-11-06 09:14:28.839059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.484 [2024-11-06 09:14:28.965042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.742 [2024-11-06 09:14:29.162757] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.742 [2024-11-06 09:14:29.162826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.306 [2024-11-06 09:14:29.648794] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.306 [2024-11-06 09:14:29.648902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.306 [2024-11-06 09:14:29.648929] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.306 [2024-11-06 09:14:29.648946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.306 [2024-11-06 09:14:29.648956] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:21:53.306 [2024-11-06 09:14:29.648970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.306 [2024-11-06 09:14:29.648980] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:53.306 [2024-11-06 09:14:29.648994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.306 09:14:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.306 "name": "Existed_Raid", 00:21:53.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.306 "strip_size_kb": 64, 00:21:53.306 "state": "configuring", 00:21:53.306 "raid_level": "raid5f", 00:21:53.306 "superblock": false, 00:21:53.306 "num_base_bdevs": 4, 00:21:53.306 "num_base_bdevs_discovered": 0, 00:21:53.306 "num_base_bdevs_operational": 4, 00:21:53.306 "base_bdevs_list": [ 00:21:53.306 { 00:21:53.306 "name": "BaseBdev1", 00:21:53.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.306 "is_configured": false, 00:21:53.306 "data_offset": 0, 00:21:53.306 "data_size": 0 00:21:53.306 }, 00:21:53.306 { 00:21:53.306 "name": "BaseBdev2", 00:21:53.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.306 "is_configured": false, 00:21:53.306 "data_offset": 0, 00:21:53.306 "data_size": 0 00:21:53.306 }, 00:21:53.306 { 00:21:53.306 "name": "BaseBdev3", 00:21:53.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.306 "is_configured": false, 00:21:53.306 "data_offset": 0, 00:21:53.306 "data_size": 0 00:21:53.306 }, 00:21:53.306 { 00:21:53.306 "name": "BaseBdev4", 00:21:53.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.306 "is_configured": false, 00:21:53.306 "data_offset": 0, 00:21:53.306 "data_size": 0 00:21:53.306 } 00:21:53.306 ] 00:21:53.306 }' 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.306 09:14:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.871 [2024-11-06 09:14:30.184899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:53.871 [2024-11-06 09:14:30.184947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.871 [2024-11-06 09:14:30.192869] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.871 [2024-11-06 09:14:30.192922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.871 [2024-11-06 09:14:30.192937] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.871 [2024-11-06 09:14:30.192953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.871 [2024-11-06 09:14:30.192962] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:53.871 [2024-11-06 09:14:30.192976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.871 [2024-11-06 09:14:30.192986] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:21:53.871 [2024-11-06 09:14:30.192999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.871 [2024-11-06 09:14:30.238431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.871 BaseBdev1 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.871 
09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.871 [ 00:21:53.871 { 00:21:53.871 "name": "BaseBdev1", 00:21:53.871 "aliases": [ 00:21:53.871 "9e56c32a-b2f4-4d6e-b032-3b00fd361189" 00:21:53.871 ], 00:21:53.871 "product_name": "Malloc disk", 00:21:53.871 "block_size": 512, 00:21:53.871 "num_blocks": 65536, 00:21:53.871 "uuid": "9e56c32a-b2f4-4d6e-b032-3b00fd361189", 00:21:53.871 "assigned_rate_limits": { 00:21:53.871 "rw_ios_per_sec": 0, 00:21:53.871 "rw_mbytes_per_sec": 0, 00:21:53.871 "r_mbytes_per_sec": 0, 00:21:53.871 "w_mbytes_per_sec": 0 00:21:53.871 }, 00:21:53.871 "claimed": true, 00:21:53.871 "claim_type": "exclusive_write", 00:21:53.871 "zoned": false, 00:21:53.871 "supported_io_types": { 00:21:53.871 "read": true, 00:21:53.871 "write": true, 00:21:53.871 "unmap": true, 00:21:53.871 "flush": true, 00:21:53.871 "reset": true, 00:21:53.871 "nvme_admin": false, 00:21:53.871 "nvme_io": false, 00:21:53.871 "nvme_io_md": false, 00:21:53.871 "write_zeroes": true, 00:21:53.871 "zcopy": true, 00:21:53.871 "get_zone_info": false, 00:21:53.871 "zone_management": false, 00:21:53.871 "zone_append": false, 00:21:53.871 "compare": false, 00:21:53.871 "compare_and_write": false, 00:21:53.871 "abort": true, 00:21:53.871 "seek_hole": false, 00:21:53.871 "seek_data": false, 00:21:53.871 "copy": true, 00:21:53.871 "nvme_iov_md": false 00:21:53.871 }, 00:21:53.871 "memory_domains": [ 00:21:53.871 { 00:21:53.871 "dma_device_id": "system", 00:21:53.871 "dma_device_type": 1 00:21:53.871 }, 00:21:53.871 { 00:21:53.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.871 "dma_device_type": 2 00:21:53.871 } 00:21:53.871 ], 00:21:53.871 "driver_specific": {} 00:21:53.871 } 
00:21:53.871 ] 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:53.871 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:53.871 "name": "Existed_Raid", 00:21:53.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.871 "strip_size_kb": 64, 00:21:53.871 "state": "configuring", 00:21:53.871 "raid_level": "raid5f", 00:21:53.871 "superblock": false, 00:21:53.871 "num_base_bdevs": 4, 00:21:53.871 "num_base_bdevs_discovered": 1, 00:21:53.871 "num_base_bdevs_operational": 4, 00:21:53.871 "base_bdevs_list": [ 00:21:53.871 { 00:21:53.871 "name": "BaseBdev1", 00:21:53.871 "uuid": "9e56c32a-b2f4-4d6e-b032-3b00fd361189", 00:21:53.871 "is_configured": true, 00:21:53.871 "data_offset": 0, 00:21:53.871 "data_size": 65536 00:21:53.871 }, 00:21:53.871 { 00:21:53.871 "name": "BaseBdev2", 00:21:53.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.871 "is_configured": false, 00:21:53.871 "data_offset": 0, 00:21:53.871 "data_size": 0 00:21:53.871 }, 00:21:53.871 { 00:21:53.871 "name": "BaseBdev3", 00:21:53.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.871 "is_configured": false, 00:21:53.871 "data_offset": 0, 00:21:53.871 "data_size": 0 00:21:53.871 }, 00:21:53.871 { 00:21:53.872 "name": "BaseBdev4", 00:21:53.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.872 "is_configured": false, 00:21:53.872 "data_offset": 0, 00:21:53.872 "data_size": 0 00:21:53.872 } 00:21:53.872 ] 00:21:53.872 }' 00:21:53.872 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:53.872 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.436 
[2024-11-06 09:14:30.814634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.436 [2024-11-06 09:14:30.814698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.436 [2024-11-06 09:14:30.822687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.436 [2024-11-06 09:14:30.825082] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:54.436 [2024-11-06 09:14:30.825136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:54.436 [2024-11-06 09:14:30.825152] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:54.436 [2024-11-06 09:14:30.825170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:54.436 [2024-11-06 09:14:30.825181] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:54.436 [2024-11-06 09:14:30.825195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:54.436 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.436 "name": "Existed_Raid", 00:21:54.436 "uuid": "00000000-0000-0000-0000-000000000000", 
00:21:54.436 "strip_size_kb": 64, 00:21:54.436 "state": "configuring", 00:21:54.436 "raid_level": "raid5f", 00:21:54.436 "superblock": false, 00:21:54.436 "num_base_bdevs": 4, 00:21:54.436 "num_base_bdevs_discovered": 1, 00:21:54.436 "num_base_bdevs_operational": 4, 00:21:54.436 "base_bdevs_list": [ 00:21:54.436 { 00:21:54.436 "name": "BaseBdev1", 00:21:54.436 "uuid": "9e56c32a-b2f4-4d6e-b032-3b00fd361189", 00:21:54.436 "is_configured": true, 00:21:54.436 "data_offset": 0, 00:21:54.436 "data_size": 65536 00:21:54.436 }, 00:21:54.436 { 00:21:54.436 "name": "BaseBdev2", 00:21:54.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.436 "is_configured": false, 00:21:54.436 "data_offset": 0, 00:21:54.436 "data_size": 0 00:21:54.437 }, 00:21:54.437 { 00:21:54.437 "name": "BaseBdev3", 00:21:54.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.437 "is_configured": false, 00:21:54.437 "data_offset": 0, 00:21:54.437 "data_size": 0 00:21:54.437 }, 00:21:54.437 { 00:21:54.437 "name": "BaseBdev4", 00:21:54.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.437 "is_configured": false, 00:21:54.437 "data_offset": 0, 00:21:54.437 "data_size": 0 00:21:54.437 } 00:21:54.437 ] 00:21:54.437 }' 00:21:54.437 09:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.437 09:14:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.002 [2024-11-06 09:14:31.377066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:55.002 BaseBdev2 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.002 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.002 [ 00:21:55.002 { 00:21:55.002 "name": "BaseBdev2", 00:21:55.002 "aliases": [ 00:21:55.002 "bad16d7f-fe7a-4e01-8d16-ee487ef348e2" 00:21:55.002 ], 00:21:55.002 "product_name": "Malloc disk", 00:21:55.002 "block_size": 512, 00:21:55.002 "num_blocks": 65536, 00:21:55.002 "uuid": "bad16d7f-fe7a-4e01-8d16-ee487ef348e2", 00:21:55.002 "assigned_rate_limits": { 00:21:55.002 "rw_ios_per_sec": 0, 00:21:55.002 "rw_mbytes_per_sec": 0, 00:21:55.002 
"r_mbytes_per_sec": 0, 00:21:55.002 "w_mbytes_per_sec": 0 00:21:55.002 }, 00:21:55.002 "claimed": true, 00:21:55.002 "claim_type": "exclusive_write", 00:21:55.002 "zoned": false, 00:21:55.002 "supported_io_types": { 00:21:55.002 "read": true, 00:21:55.002 "write": true, 00:21:55.002 "unmap": true, 00:21:55.002 "flush": true, 00:21:55.003 "reset": true, 00:21:55.003 "nvme_admin": false, 00:21:55.003 "nvme_io": false, 00:21:55.003 "nvme_io_md": false, 00:21:55.003 "write_zeroes": true, 00:21:55.003 "zcopy": true, 00:21:55.003 "get_zone_info": false, 00:21:55.003 "zone_management": false, 00:21:55.003 "zone_append": false, 00:21:55.003 "compare": false, 00:21:55.003 "compare_and_write": false, 00:21:55.003 "abort": true, 00:21:55.003 "seek_hole": false, 00:21:55.003 "seek_data": false, 00:21:55.003 "copy": true, 00:21:55.003 "nvme_iov_md": false 00:21:55.003 }, 00:21:55.003 "memory_domains": [ 00:21:55.003 { 00:21:55.003 "dma_device_id": "system", 00:21:55.003 "dma_device_type": 1 00:21:55.003 }, 00:21:55.003 { 00:21:55.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.003 "dma_device_type": 2 00:21:55.003 } 00:21:55.003 ], 00:21:55.003 "driver_specific": {} 00:21:55.003 } 00:21:55.003 ] 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.003 "name": "Existed_Raid", 00:21:55.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.003 "strip_size_kb": 64, 00:21:55.003 "state": "configuring", 00:21:55.003 "raid_level": "raid5f", 00:21:55.003 "superblock": false, 00:21:55.003 "num_base_bdevs": 4, 00:21:55.003 "num_base_bdevs_discovered": 2, 00:21:55.003 "num_base_bdevs_operational": 4, 00:21:55.003 "base_bdevs_list": [ 00:21:55.003 { 00:21:55.003 "name": "BaseBdev1", 00:21:55.003 "uuid": 
"9e56c32a-b2f4-4d6e-b032-3b00fd361189", 00:21:55.003 "is_configured": true, 00:21:55.003 "data_offset": 0, 00:21:55.003 "data_size": 65536 00:21:55.003 }, 00:21:55.003 { 00:21:55.003 "name": "BaseBdev2", 00:21:55.003 "uuid": "bad16d7f-fe7a-4e01-8d16-ee487ef348e2", 00:21:55.003 "is_configured": true, 00:21:55.003 "data_offset": 0, 00:21:55.003 "data_size": 65536 00:21:55.003 }, 00:21:55.003 { 00:21:55.003 "name": "BaseBdev3", 00:21:55.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.003 "is_configured": false, 00:21:55.003 "data_offset": 0, 00:21:55.003 "data_size": 0 00:21:55.003 }, 00:21:55.003 { 00:21:55.003 "name": "BaseBdev4", 00:21:55.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.003 "is_configured": false, 00:21:55.003 "data_offset": 0, 00:21:55.003 "data_size": 0 00:21:55.003 } 00:21:55.003 ] 00:21:55.003 }' 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.003 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.572 [2024-11-06 09:14:31.992933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:55.572 BaseBdev3 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- 
# local bdev_timeout= 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.572 09:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.572 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.572 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:55.572 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.572 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.572 [ 00:21:55.572 { 00:21:55.572 "name": "BaseBdev3", 00:21:55.572 "aliases": [ 00:21:55.572 "95ceec24-cb8c-427f-8ee1-a10dfb47951d" 00:21:55.572 ], 00:21:55.572 "product_name": "Malloc disk", 00:21:55.572 "block_size": 512, 00:21:55.572 "num_blocks": 65536, 00:21:55.572 "uuid": "95ceec24-cb8c-427f-8ee1-a10dfb47951d", 00:21:55.572 "assigned_rate_limits": { 00:21:55.572 "rw_ios_per_sec": 0, 00:21:55.572 "rw_mbytes_per_sec": 0, 00:21:55.572 "r_mbytes_per_sec": 0, 00:21:55.572 "w_mbytes_per_sec": 0 00:21:55.572 }, 00:21:55.572 "claimed": true, 00:21:55.572 "claim_type": "exclusive_write", 00:21:55.572 "zoned": false, 00:21:55.572 "supported_io_types": { 00:21:55.572 "read": true, 00:21:55.572 "write": true, 00:21:55.572 "unmap": true, 00:21:55.572 "flush": true, 00:21:55.572 "reset": true, 00:21:55.572 "nvme_admin": false, 
00:21:55.572 "nvme_io": false, 00:21:55.572 "nvme_io_md": false, 00:21:55.572 "write_zeroes": true, 00:21:55.572 "zcopy": true, 00:21:55.572 "get_zone_info": false, 00:21:55.572 "zone_management": false, 00:21:55.572 "zone_append": false, 00:21:55.572 "compare": false, 00:21:55.572 "compare_and_write": false, 00:21:55.572 "abort": true, 00:21:55.572 "seek_hole": false, 00:21:55.572 "seek_data": false, 00:21:55.572 "copy": true, 00:21:55.572 "nvme_iov_md": false 00:21:55.572 }, 00:21:55.572 "memory_domains": [ 00:21:55.572 { 00:21:55.572 "dma_device_id": "system", 00:21:55.572 "dma_device_type": 1 00:21:55.572 }, 00:21:55.572 { 00:21:55.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.572 "dma_device_type": 2 00:21:55.572 } 00:21:55.573 ], 00:21:55.573 "driver_specific": {} 00:21:55.573 } 00:21:55.573 ] 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:55.573 "name": "Existed_Raid", 00:21:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.573 "strip_size_kb": 64, 00:21:55.573 "state": "configuring", 00:21:55.573 "raid_level": "raid5f", 00:21:55.573 "superblock": false, 00:21:55.573 "num_base_bdevs": 4, 00:21:55.573 "num_base_bdevs_discovered": 3, 00:21:55.573 "num_base_bdevs_operational": 4, 00:21:55.573 "base_bdevs_list": [ 00:21:55.573 { 00:21:55.573 "name": "BaseBdev1", 00:21:55.573 "uuid": "9e56c32a-b2f4-4d6e-b032-3b00fd361189", 00:21:55.573 "is_configured": true, 00:21:55.573 "data_offset": 0, 00:21:55.573 "data_size": 65536 00:21:55.573 }, 00:21:55.573 { 00:21:55.573 "name": "BaseBdev2", 00:21:55.573 "uuid": "bad16d7f-fe7a-4e01-8d16-ee487ef348e2", 00:21:55.573 "is_configured": true, 00:21:55.573 "data_offset": 0, 00:21:55.573 "data_size": 65536 00:21:55.573 }, 00:21:55.573 { 
00:21:55.573 "name": "BaseBdev3", 00:21:55.573 "uuid": "95ceec24-cb8c-427f-8ee1-a10dfb47951d", 00:21:55.573 "is_configured": true, 00:21:55.573 "data_offset": 0, 00:21:55.573 "data_size": 65536 00:21:55.573 }, 00:21:55.573 { 00:21:55.573 "name": "BaseBdev4", 00:21:55.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.573 "is_configured": false, 00:21:55.573 "data_offset": 0, 00:21:55.573 "data_size": 0 00:21:55.573 } 00:21:55.573 ] 00:21:55.573 }' 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:55.573 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.139 [2024-11-06 09:14:32.587672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:56.139 [2024-11-06 09:14:32.587772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:56.139 [2024-11-06 09:14:32.587788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:21:56.139 [2024-11-06 09:14:32.588137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:56.139 [2024-11-06 09:14:32.595054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:56.139 [2024-11-06 09:14:32.595088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:56.139 [2024-11-06 09:14:32.595408] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:56.139 BaseBdev4 00:21:56.139 09:14:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.139 [ 00:21:56.139 { 00:21:56.139 "name": "BaseBdev4", 00:21:56.139 "aliases": [ 00:21:56.139 "c96c5bda-23f4-4103-8e06-1b5e6a45653a" 00:21:56.139 ], 00:21:56.139 "product_name": "Malloc disk", 00:21:56.139 "block_size": 512, 00:21:56.139 "num_blocks": 65536, 00:21:56.139 "uuid": "c96c5bda-23f4-4103-8e06-1b5e6a45653a", 00:21:56.139 "assigned_rate_limits": { 00:21:56.139 "rw_ios_per_sec": 0, 00:21:56.139 
"rw_mbytes_per_sec": 0, 00:21:56.139 "r_mbytes_per_sec": 0, 00:21:56.139 "w_mbytes_per_sec": 0 00:21:56.139 }, 00:21:56.139 "claimed": true, 00:21:56.139 "claim_type": "exclusive_write", 00:21:56.139 "zoned": false, 00:21:56.139 "supported_io_types": { 00:21:56.139 "read": true, 00:21:56.139 "write": true, 00:21:56.139 "unmap": true, 00:21:56.139 "flush": true, 00:21:56.139 "reset": true, 00:21:56.139 "nvme_admin": false, 00:21:56.139 "nvme_io": false, 00:21:56.139 "nvme_io_md": false, 00:21:56.139 "write_zeroes": true, 00:21:56.139 "zcopy": true, 00:21:56.139 "get_zone_info": false, 00:21:56.139 "zone_management": false, 00:21:56.139 "zone_append": false, 00:21:56.139 "compare": false, 00:21:56.139 "compare_and_write": false, 00:21:56.139 "abort": true, 00:21:56.139 "seek_hole": false, 00:21:56.139 "seek_data": false, 00:21:56.139 "copy": true, 00:21:56.139 "nvme_iov_md": false 00:21:56.139 }, 00:21:56.139 "memory_domains": [ 00:21:56.139 { 00:21:56.139 "dma_device_id": "system", 00:21:56.139 "dma_device_type": 1 00:21:56.139 }, 00:21:56.139 { 00:21:56.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.139 "dma_device_type": 2 00:21:56.139 } 00:21:56.139 ], 00:21:56.139 "driver_specific": {} 00:21:56.139 } 00:21:56.139 ] 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:56.139 09:14:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:56.139 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:56.140 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.140 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.140 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.140 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.140 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:56.140 "name": "Existed_Raid", 00:21:56.140 "uuid": "eb8d30fb-3246-49af-9069-2724f4236b9f", 00:21:56.140 "strip_size_kb": 64, 00:21:56.140 "state": "online", 00:21:56.140 "raid_level": "raid5f", 00:21:56.140 "superblock": false, 00:21:56.140 "num_base_bdevs": 4, 00:21:56.140 "num_base_bdevs_discovered": 4, 00:21:56.140 "num_base_bdevs_operational": 4, 00:21:56.140 "base_bdevs_list": [ 00:21:56.140 { 00:21:56.140 "name": 
"BaseBdev1", 00:21:56.140 "uuid": "9e56c32a-b2f4-4d6e-b032-3b00fd361189", 00:21:56.140 "is_configured": true, 00:21:56.140 "data_offset": 0, 00:21:56.140 "data_size": 65536 00:21:56.140 }, 00:21:56.140 { 00:21:56.140 "name": "BaseBdev2", 00:21:56.140 "uuid": "bad16d7f-fe7a-4e01-8d16-ee487ef348e2", 00:21:56.140 "is_configured": true, 00:21:56.140 "data_offset": 0, 00:21:56.140 "data_size": 65536 00:21:56.140 }, 00:21:56.140 { 00:21:56.140 "name": "BaseBdev3", 00:21:56.140 "uuid": "95ceec24-cb8c-427f-8ee1-a10dfb47951d", 00:21:56.140 "is_configured": true, 00:21:56.140 "data_offset": 0, 00:21:56.140 "data_size": 65536 00:21:56.140 }, 00:21:56.140 { 00:21:56.140 "name": "BaseBdev4", 00:21:56.140 "uuid": "c96c5bda-23f4-4103-8e06-1b5e6a45653a", 00:21:56.140 "is_configured": true, 00:21:56.140 "data_offset": 0, 00:21:56.140 "data_size": 65536 00:21:56.140 } 00:21:56.140 ] 00:21:56.140 }' 00:21:56.140 09:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:56.140 09:14:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.706 [2024-11-06 09:14:33.158981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:56.706 "name": "Existed_Raid", 00:21:56.706 "aliases": [ 00:21:56.706 "eb8d30fb-3246-49af-9069-2724f4236b9f" 00:21:56.706 ], 00:21:56.706 "product_name": "Raid Volume", 00:21:56.706 "block_size": 512, 00:21:56.706 "num_blocks": 196608, 00:21:56.706 "uuid": "eb8d30fb-3246-49af-9069-2724f4236b9f", 00:21:56.706 "assigned_rate_limits": { 00:21:56.706 "rw_ios_per_sec": 0, 00:21:56.706 "rw_mbytes_per_sec": 0, 00:21:56.706 "r_mbytes_per_sec": 0, 00:21:56.706 "w_mbytes_per_sec": 0 00:21:56.706 }, 00:21:56.706 "claimed": false, 00:21:56.706 "zoned": false, 00:21:56.706 "supported_io_types": { 00:21:56.706 "read": true, 00:21:56.706 "write": true, 00:21:56.706 "unmap": false, 00:21:56.706 "flush": false, 00:21:56.706 "reset": true, 00:21:56.706 "nvme_admin": false, 00:21:56.706 "nvme_io": false, 00:21:56.706 "nvme_io_md": false, 00:21:56.706 "write_zeroes": true, 00:21:56.706 "zcopy": false, 00:21:56.706 "get_zone_info": false, 00:21:56.706 "zone_management": false, 00:21:56.706 "zone_append": false, 00:21:56.706 "compare": false, 00:21:56.706 "compare_and_write": false, 00:21:56.706 "abort": false, 00:21:56.706 "seek_hole": false, 00:21:56.706 "seek_data": false, 00:21:56.706 "copy": false, 00:21:56.706 "nvme_iov_md": false 00:21:56.706 }, 00:21:56.706 "driver_specific": { 00:21:56.706 "raid": { 00:21:56.706 "uuid": "eb8d30fb-3246-49af-9069-2724f4236b9f", 00:21:56.706 "strip_size_kb": 64, 
00:21:56.706 "state": "online", 00:21:56.706 "raid_level": "raid5f", 00:21:56.706 "superblock": false, 00:21:56.706 "num_base_bdevs": 4, 00:21:56.706 "num_base_bdevs_discovered": 4, 00:21:56.706 "num_base_bdevs_operational": 4, 00:21:56.706 "base_bdevs_list": [ 00:21:56.706 { 00:21:56.706 "name": "BaseBdev1", 00:21:56.706 "uuid": "9e56c32a-b2f4-4d6e-b032-3b00fd361189", 00:21:56.706 "is_configured": true, 00:21:56.706 "data_offset": 0, 00:21:56.706 "data_size": 65536 00:21:56.706 }, 00:21:56.706 { 00:21:56.706 "name": "BaseBdev2", 00:21:56.706 "uuid": "bad16d7f-fe7a-4e01-8d16-ee487ef348e2", 00:21:56.706 "is_configured": true, 00:21:56.706 "data_offset": 0, 00:21:56.706 "data_size": 65536 00:21:56.706 }, 00:21:56.706 { 00:21:56.706 "name": "BaseBdev3", 00:21:56.706 "uuid": "95ceec24-cb8c-427f-8ee1-a10dfb47951d", 00:21:56.706 "is_configured": true, 00:21:56.706 "data_offset": 0, 00:21:56.706 "data_size": 65536 00:21:56.706 }, 00:21:56.706 { 00:21:56.706 "name": "BaseBdev4", 00:21:56.706 "uuid": "c96c5bda-23f4-4103-8e06-1b5e6a45653a", 00:21:56.706 "is_configured": true, 00:21:56.706 "data_offset": 0, 00:21:56.706 "data_size": 65536 00:21:56.706 } 00:21:56.706 ] 00:21:56.706 } 00:21:56.706 } 00:21:56.706 }' 00:21:56.706 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:56.965 BaseBdev2 00:21:56.965 BaseBdev3 00:21:56.965 BaseBdev4' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.965 09:14:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.965 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:57.223 [2024-11-06 09:14:33.506925] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.223 09:14:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.223 09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.223 "name": "Existed_Raid", 00:21:57.223 "uuid": "eb8d30fb-3246-49af-9069-2724f4236b9f", 00:21:57.223 "strip_size_kb": 64, 00:21:57.223 "state": "online", 00:21:57.223 "raid_level": "raid5f", 00:21:57.223 "superblock": false, 00:21:57.223 "num_base_bdevs": 4, 00:21:57.223 "num_base_bdevs_discovered": 3, 00:21:57.224 "num_base_bdevs_operational": 3, 00:21:57.224 "base_bdevs_list": [ 00:21:57.224 { 00:21:57.224 "name": null, 00:21:57.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.224 "is_configured": false, 00:21:57.224 "data_offset": 0, 00:21:57.224 "data_size": 65536 00:21:57.224 }, 00:21:57.224 { 00:21:57.224 "name": "BaseBdev2", 00:21:57.224 "uuid": "bad16d7f-fe7a-4e01-8d16-ee487ef348e2", 00:21:57.224 "is_configured": true, 00:21:57.224 "data_offset": 0, 00:21:57.224 "data_size": 65536 00:21:57.224 }, 00:21:57.224 { 00:21:57.224 "name": "BaseBdev3", 00:21:57.224 "uuid": "95ceec24-cb8c-427f-8ee1-a10dfb47951d", 00:21:57.224 "is_configured": true, 00:21:57.224 "data_offset": 0, 00:21:57.224 "data_size": 65536 00:21:57.224 }, 00:21:57.224 { 00:21:57.224 "name": "BaseBdev4", 00:21:57.224 "uuid": "c96c5bda-23f4-4103-8e06-1b5e6a45653a", 00:21:57.224 "is_configured": true, 00:21:57.224 "data_offset": 0, 00:21:57.224 "data_size": 65536 00:21:57.224 } 00:21:57.224 ] 00:21:57.224 }' 00:21:57.224 
09:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.224 09:14:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.789 [2024-11-06 09:14:34.158761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:57.789 [2024-11-06 09:14:34.158903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:57.789 [2024-11-06 09:14:34.246404] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:57.789 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:57.790 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.790 [2024-11-06 09:14:34.306430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.047 [2024-11-06 09:14:34.458656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:58.047 [2024-11-06 09:14:34.458715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.047 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.048 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.048 09:14:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.306 BaseBdev2 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.306 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.307 [ 00:21:58.307 { 00:21:58.307 "name": "BaseBdev2", 00:21:58.307 "aliases": [ 00:21:58.307 "7dffcf55-a035-482b-9549-c2aaca38446b" 00:21:58.307 ], 00:21:58.307 "product_name": "Malloc disk", 00:21:58.307 "block_size": 512, 00:21:58.307 "num_blocks": 65536, 00:21:58.307 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:21:58.307 "assigned_rate_limits": { 00:21:58.307 "rw_ios_per_sec": 0, 00:21:58.307 "rw_mbytes_per_sec": 0, 00:21:58.307 "r_mbytes_per_sec": 0, 00:21:58.307 "w_mbytes_per_sec": 0 00:21:58.307 }, 00:21:58.307 "claimed": false, 00:21:58.307 "zoned": false, 00:21:58.307 "supported_io_types": { 00:21:58.307 "read": true, 00:21:58.307 "write": true, 00:21:58.307 "unmap": true, 00:21:58.307 "flush": true, 00:21:58.307 "reset": true, 00:21:58.307 "nvme_admin": false, 00:21:58.307 "nvme_io": false, 00:21:58.307 "nvme_io_md": false, 00:21:58.307 "write_zeroes": true, 00:21:58.307 "zcopy": true, 00:21:58.307 "get_zone_info": false, 00:21:58.307 "zone_management": false, 00:21:58.307 "zone_append": false, 00:21:58.307 "compare": false, 00:21:58.307 "compare_and_write": false, 00:21:58.307 "abort": true, 00:21:58.307 "seek_hole": false, 00:21:58.307 "seek_data": false, 00:21:58.307 "copy": true, 00:21:58.307 "nvme_iov_md": false 00:21:58.307 }, 00:21:58.307 "memory_domains": [ 00:21:58.307 { 00:21:58.307 "dma_device_id": "system", 00:21:58.307 "dma_device_type": 1 00:21:58.307 }, 
00:21:58.307 { 00:21:58.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.307 "dma_device_type": 2 00:21:58.307 } 00:21:58.307 ], 00:21:58.307 "driver_specific": {} 00:21:58.307 } 00:21:58.307 ] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.307 BaseBdev3 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.307 [ 00:21:58.307 { 00:21:58.307 "name": "BaseBdev3", 00:21:58.307 "aliases": [ 00:21:58.307 "8a44fec2-b32a-442b-873b-55257d3144f7" 00:21:58.307 ], 00:21:58.307 "product_name": "Malloc disk", 00:21:58.307 "block_size": 512, 00:21:58.307 "num_blocks": 65536, 00:21:58.307 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:21:58.307 "assigned_rate_limits": { 00:21:58.307 "rw_ios_per_sec": 0, 00:21:58.307 "rw_mbytes_per_sec": 0, 00:21:58.307 "r_mbytes_per_sec": 0, 00:21:58.307 "w_mbytes_per_sec": 0 00:21:58.307 }, 00:21:58.307 "claimed": false, 00:21:58.307 "zoned": false, 00:21:58.307 "supported_io_types": { 00:21:58.307 "read": true, 00:21:58.307 "write": true, 00:21:58.307 "unmap": true, 00:21:58.307 "flush": true, 00:21:58.307 "reset": true, 00:21:58.307 "nvme_admin": false, 00:21:58.307 "nvme_io": false, 00:21:58.307 "nvme_io_md": false, 00:21:58.307 "write_zeroes": true, 00:21:58.307 "zcopy": true, 00:21:58.307 "get_zone_info": false, 00:21:58.307 "zone_management": false, 00:21:58.307 "zone_append": false, 00:21:58.307 "compare": false, 00:21:58.307 "compare_and_write": false, 00:21:58.307 "abort": true, 00:21:58.307 "seek_hole": false, 00:21:58.307 "seek_data": false, 00:21:58.307 "copy": true, 00:21:58.307 "nvme_iov_md": false 00:21:58.307 }, 00:21:58.307 "memory_domains": [ 00:21:58.307 { 00:21:58.307 "dma_device_id": "system", 00:21:58.307 
"dma_device_type": 1 00:21:58.307 }, 00:21:58.307 { 00:21:58.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.307 "dma_device_type": 2 00:21:58.307 } 00:21:58.307 ], 00:21:58.307 "driver_specific": {} 00:21:58.307 } 00:21:58.307 ] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.307 BaseBdev4 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:58.307 09:14:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.307 [ 00:21:58.307 { 00:21:58.307 "name": "BaseBdev4", 00:21:58.307 "aliases": [ 00:21:58.307 "ff251f06-3f16-43db-a1f7-3c06f0c78312" 00:21:58.307 ], 00:21:58.307 "product_name": "Malloc disk", 00:21:58.307 "block_size": 512, 00:21:58.307 "num_blocks": 65536, 00:21:58.307 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:21:58.307 "assigned_rate_limits": { 00:21:58.307 "rw_ios_per_sec": 0, 00:21:58.307 "rw_mbytes_per_sec": 0, 00:21:58.307 "r_mbytes_per_sec": 0, 00:21:58.307 "w_mbytes_per_sec": 0 00:21:58.307 }, 00:21:58.307 "claimed": false, 00:21:58.307 "zoned": false, 00:21:58.307 "supported_io_types": { 00:21:58.307 "read": true, 00:21:58.307 "write": true, 00:21:58.307 "unmap": true, 00:21:58.307 "flush": true, 00:21:58.307 "reset": true, 00:21:58.307 "nvme_admin": false, 00:21:58.307 "nvme_io": false, 00:21:58.307 "nvme_io_md": false, 00:21:58.307 "write_zeroes": true, 00:21:58.307 "zcopy": true, 00:21:58.307 "get_zone_info": false, 00:21:58.307 "zone_management": false, 00:21:58.307 "zone_append": false, 00:21:58.307 "compare": false, 00:21:58.307 "compare_and_write": false, 00:21:58.307 "abort": true, 00:21:58.307 "seek_hole": false, 00:21:58.307 "seek_data": false, 00:21:58.307 "copy": true, 00:21:58.307 "nvme_iov_md": false 00:21:58.307 }, 00:21:58.307 "memory_domains": [ 00:21:58.307 { 00:21:58.307 
"dma_device_id": "system", 00:21:58.307 "dma_device_type": 1 00:21:58.307 }, 00:21:58.307 { 00:21:58.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.307 "dma_device_type": 2 00:21:58.307 } 00:21:58.307 ], 00:21:58.307 "driver_specific": {} 00:21:58.307 } 00:21:58.307 ] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:21:58.307 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.308 [2024-11-06 09:14:34.817685] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:58.308 [2024-11-06 09:14:34.817739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:58.308 [2024-11-06 09:14:34.817771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:58.308 [2024-11-06 09:14:34.820135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:58.308 [2024-11-06 09:14:34.820211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.308 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.566 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.567 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:58.567 "name": "Existed_Raid", 00:21:58.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.567 "strip_size_kb": 64, 00:21:58.567 "state": "configuring", 00:21:58.567 "raid_level": "raid5f", 00:21:58.567 "superblock": false, 00:21:58.567 
"num_base_bdevs": 4, 00:21:58.567 "num_base_bdevs_discovered": 3, 00:21:58.567 "num_base_bdevs_operational": 4, 00:21:58.567 "base_bdevs_list": [ 00:21:58.567 { 00:21:58.567 "name": "BaseBdev1", 00:21:58.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.567 "is_configured": false, 00:21:58.567 "data_offset": 0, 00:21:58.567 "data_size": 0 00:21:58.567 }, 00:21:58.567 { 00:21:58.567 "name": "BaseBdev2", 00:21:58.567 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:21:58.567 "is_configured": true, 00:21:58.567 "data_offset": 0, 00:21:58.567 "data_size": 65536 00:21:58.567 }, 00:21:58.567 { 00:21:58.567 "name": "BaseBdev3", 00:21:58.567 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:21:58.567 "is_configured": true, 00:21:58.567 "data_offset": 0, 00:21:58.567 "data_size": 65536 00:21:58.567 }, 00:21:58.567 { 00:21:58.567 "name": "BaseBdev4", 00:21:58.567 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:21:58.567 "is_configured": true, 00:21:58.567 "data_offset": 0, 00:21:58.567 "data_size": 65536 00:21:58.567 } 00:21:58.567 ] 00:21:58.567 }' 00:21:58.567 09:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:58.567 09:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.825 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:21:58.825 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.825 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.083 [2024-11-06 09:14:35.361896] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.083 "name": "Existed_Raid", 00:21:59.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.083 "strip_size_kb": 64, 00:21:59.083 "state": "configuring", 00:21:59.083 "raid_level": "raid5f", 00:21:59.083 "superblock": false, 00:21:59.083 "num_base_bdevs": 4, 
00:21:59.083 "num_base_bdevs_discovered": 2, 00:21:59.083 "num_base_bdevs_operational": 4, 00:21:59.083 "base_bdevs_list": [ 00:21:59.083 { 00:21:59.083 "name": "BaseBdev1", 00:21:59.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.083 "is_configured": false, 00:21:59.083 "data_offset": 0, 00:21:59.083 "data_size": 0 00:21:59.083 }, 00:21:59.083 { 00:21:59.083 "name": null, 00:21:59.083 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:21:59.083 "is_configured": false, 00:21:59.083 "data_offset": 0, 00:21:59.083 "data_size": 65536 00:21:59.083 }, 00:21:59.083 { 00:21:59.083 "name": "BaseBdev3", 00:21:59.083 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:21:59.083 "is_configured": true, 00:21:59.083 "data_offset": 0, 00:21:59.083 "data_size": 65536 00:21:59.083 }, 00:21:59.083 { 00:21:59.083 "name": "BaseBdev4", 00:21:59.083 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:21:59.083 "is_configured": true, 00:21:59.083 "data_offset": 0, 00:21:59.083 "data_size": 65536 00:21:59.083 } 00:21:59.083 ] 00:21:59.083 }' 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.083 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:21:59.652 09:14:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.652 [2024-11-06 09:14:35.984490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:59.652 BaseBdev1 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:59.652 09:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.652 09:14:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.652 [ 00:21:59.652 { 00:21:59.652 "name": "BaseBdev1", 00:21:59.652 "aliases": [ 00:21:59.652 "cd7f3e94-0d71-4063-a3b0-2218ed3b244f" 00:21:59.652 ], 00:21:59.652 "product_name": "Malloc disk", 00:21:59.652 "block_size": 512, 00:21:59.652 "num_blocks": 65536, 00:21:59.652 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:21:59.652 "assigned_rate_limits": { 00:21:59.652 "rw_ios_per_sec": 0, 00:21:59.652 "rw_mbytes_per_sec": 0, 00:21:59.652 "r_mbytes_per_sec": 0, 00:21:59.652 "w_mbytes_per_sec": 0 00:21:59.652 }, 00:21:59.652 "claimed": true, 00:21:59.652 "claim_type": "exclusive_write", 00:21:59.652 "zoned": false, 00:21:59.652 "supported_io_types": { 00:21:59.652 "read": true, 00:21:59.652 "write": true, 00:21:59.652 "unmap": true, 00:21:59.652 "flush": true, 00:21:59.652 "reset": true, 00:21:59.652 "nvme_admin": false, 00:21:59.652 "nvme_io": false, 00:21:59.652 "nvme_io_md": false, 00:21:59.652 "write_zeroes": true, 00:21:59.652 "zcopy": true, 00:21:59.652 "get_zone_info": false, 00:21:59.652 "zone_management": false, 00:21:59.652 "zone_append": false, 00:21:59.652 "compare": false, 00:21:59.652 "compare_and_write": false, 00:21:59.652 "abort": true, 00:21:59.652 "seek_hole": false, 00:21:59.652 "seek_data": false, 00:21:59.652 "copy": true, 00:21:59.652 "nvme_iov_md": false 00:21:59.652 }, 00:21:59.652 "memory_domains": [ 00:21:59.652 { 00:21:59.652 "dma_device_id": "system", 00:21:59.652 "dma_device_type": 1 00:21:59.652 }, 00:21:59.652 { 00:21:59.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.652 "dma_device_type": 2 00:21:59.652 } 00:21:59.652 ], 00:21:59.652 "driver_specific": {} 00:21:59.652 } 00:21:59.652 ] 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:21:59.652 09:14:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:59.652 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.652 "name": "Existed_Raid", 00:21:59.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.652 "strip_size_kb": 64, 00:21:59.652 "state": 
"configuring", 00:21:59.652 "raid_level": "raid5f", 00:21:59.652 "superblock": false, 00:21:59.652 "num_base_bdevs": 4, 00:21:59.652 "num_base_bdevs_discovered": 3, 00:21:59.652 "num_base_bdevs_operational": 4, 00:21:59.652 "base_bdevs_list": [ 00:21:59.652 { 00:21:59.653 "name": "BaseBdev1", 00:21:59.653 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:21:59.653 "is_configured": true, 00:21:59.653 "data_offset": 0, 00:21:59.653 "data_size": 65536 00:21:59.653 }, 00:21:59.653 { 00:21:59.653 "name": null, 00:21:59.653 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:21:59.653 "is_configured": false, 00:21:59.653 "data_offset": 0, 00:21:59.653 "data_size": 65536 00:21:59.653 }, 00:21:59.653 { 00:21:59.653 "name": "BaseBdev3", 00:21:59.653 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:21:59.653 "is_configured": true, 00:21:59.653 "data_offset": 0, 00:21:59.653 "data_size": 65536 00:21:59.653 }, 00:21:59.653 { 00:21:59.653 "name": "BaseBdev4", 00:21:59.653 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:21:59.653 "is_configured": true, 00:21:59.653 "data_offset": 0, 00:21:59.653 "data_size": 65536 00:21:59.653 } 00:21:59.653 ] 00:21:59.653 }' 00:21:59.653 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.653 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.219 09:14:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.219 [2024-11-06 09:14:36.568697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.219 09:14:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.219 "name": "Existed_Raid", 00:22:00.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.219 "strip_size_kb": 64, 00:22:00.219 "state": "configuring", 00:22:00.219 "raid_level": "raid5f", 00:22:00.219 "superblock": false, 00:22:00.219 "num_base_bdevs": 4, 00:22:00.219 "num_base_bdevs_discovered": 2, 00:22:00.219 "num_base_bdevs_operational": 4, 00:22:00.219 "base_bdevs_list": [ 00:22:00.219 { 00:22:00.219 "name": "BaseBdev1", 00:22:00.219 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:22:00.219 "is_configured": true, 00:22:00.219 "data_offset": 0, 00:22:00.219 "data_size": 65536 00:22:00.219 }, 00:22:00.219 { 00:22:00.219 "name": null, 00:22:00.219 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:22:00.219 "is_configured": false, 00:22:00.219 "data_offset": 0, 00:22:00.219 "data_size": 65536 00:22:00.219 }, 00:22:00.219 { 00:22:00.219 "name": null, 00:22:00.219 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:22:00.219 "is_configured": false, 00:22:00.219 "data_offset": 0, 00:22:00.219 "data_size": 65536 00:22:00.219 }, 00:22:00.219 { 00:22:00.219 "name": "BaseBdev4", 00:22:00.219 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:22:00.219 "is_configured": true, 00:22:00.219 "data_offset": 0, 00:22:00.219 "data_size": 65536 00:22:00.219 } 00:22:00.219 ] 00:22:00.219 }' 00:22:00.219 09:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.219 09:14:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.786 [2024-11-06 09:14:37.128849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:00.786 
09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:00.786 "name": "Existed_Raid", 00:22:00.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:00.786 "strip_size_kb": 64, 00:22:00.786 "state": "configuring", 00:22:00.786 "raid_level": "raid5f", 00:22:00.786 "superblock": false, 00:22:00.786 "num_base_bdevs": 4, 00:22:00.786 "num_base_bdevs_discovered": 3, 00:22:00.786 "num_base_bdevs_operational": 4, 00:22:00.786 "base_bdevs_list": [ 00:22:00.786 { 00:22:00.786 "name": "BaseBdev1", 00:22:00.786 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:22:00.786 "is_configured": true, 00:22:00.786 "data_offset": 0, 00:22:00.786 "data_size": 65536 00:22:00.786 }, 00:22:00.786 { 00:22:00.786 "name": null, 00:22:00.786 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:22:00.786 "is_configured": 
false, 00:22:00.786 "data_offset": 0, 00:22:00.786 "data_size": 65536 00:22:00.786 }, 00:22:00.786 { 00:22:00.786 "name": "BaseBdev3", 00:22:00.786 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:22:00.786 "is_configured": true, 00:22:00.786 "data_offset": 0, 00:22:00.786 "data_size": 65536 00:22:00.786 }, 00:22:00.786 { 00:22:00.786 "name": "BaseBdev4", 00:22:00.786 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:22:00.786 "is_configured": true, 00:22:00.786 "data_offset": 0, 00:22:00.786 "data_size": 65536 00:22:00.786 } 00:22:00.786 ] 00:22:00.786 }' 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:00.786 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.398 [2024-11-06 09:14:37.677014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.398 "name": "Existed_Raid", 00:22:01.398 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:01.398 "strip_size_kb": 64, 00:22:01.398 "state": "configuring", 00:22:01.398 "raid_level": "raid5f", 00:22:01.398 "superblock": false, 00:22:01.398 "num_base_bdevs": 4, 00:22:01.398 "num_base_bdevs_discovered": 2, 00:22:01.398 "num_base_bdevs_operational": 4, 00:22:01.398 "base_bdevs_list": [ 00:22:01.398 { 00:22:01.398 "name": null, 00:22:01.398 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:22:01.398 "is_configured": false, 00:22:01.398 "data_offset": 0, 00:22:01.398 "data_size": 65536 00:22:01.398 }, 00:22:01.398 { 00:22:01.398 "name": null, 00:22:01.398 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:22:01.398 "is_configured": false, 00:22:01.398 "data_offset": 0, 00:22:01.398 "data_size": 65536 00:22:01.398 }, 00:22:01.398 { 00:22:01.398 "name": "BaseBdev3", 00:22:01.398 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:22:01.398 "is_configured": true, 00:22:01.398 "data_offset": 0, 00:22:01.398 "data_size": 65536 00:22:01.398 }, 00:22:01.398 { 00:22:01.398 "name": "BaseBdev4", 00:22:01.398 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:22:01.398 "is_configured": true, 00:22:01.398 "data_offset": 0, 00:22:01.398 "data_size": 65536 00:22:01.398 } 00:22:01.398 ] 00:22:01.398 }' 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.398 09:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.964 [2024-11-06 09:14:38.324022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.964 "name": "Existed_Raid", 00:22:01.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.964 "strip_size_kb": 64, 00:22:01.964 "state": "configuring", 00:22:01.964 "raid_level": "raid5f", 00:22:01.964 "superblock": false, 00:22:01.964 "num_base_bdevs": 4, 00:22:01.964 "num_base_bdevs_discovered": 3, 00:22:01.964 "num_base_bdevs_operational": 4, 00:22:01.964 "base_bdevs_list": [ 00:22:01.964 { 00:22:01.964 "name": null, 00:22:01.964 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:22:01.964 "is_configured": false, 00:22:01.964 "data_offset": 0, 00:22:01.964 "data_size": 65536 00:22:01.964 }, 00:22:01.964 { 00:22:01.964 "name": "BaseBdev2", 00:22:01.964 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:22:01.964 "is_configured": true, 00:22:01.964 "data_offset": 0, 00:22:01.964 "data_size": 65536 00:22:01.964 }, 00:22:01.964 { 00:22:01.964 "name": "BaseBdev3", 00:22:01.964 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:22:01.964 "is_configured": true, 00:22:01.964 "data_offset": 0, 00:22:01.964 "data_size": 65536 00:22:01.964 }, 00:22:01.964 { 00:22:01.964 "name": "BaseBdev4", 00:22:01.964 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:22:01.964 "is_configured": true, 00:22:01.964 "data_offset": 0, 00:22:01.964 "data_size": 65536 00:22:01.964 } 00:22:01.964 ] 00:22:01.964 }' 00:22:01.964 09:14:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.964 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cd7f3e94-0d71-4063-a3b0-2218ed3b244f 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 [2024-11-06 09:14:38.985431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:02.531 [2024-11-06 
09:14:38.985500] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:02.531 [2024-11-06 09:14:38.985513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:02.531 [2024-11-06 09:14:38.985849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:02.531 [2024-11-06 09:14:38.992250] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:02.531 [2024-11-06 09:14:38.992286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:02.531 [2024-11-06 09:14:38.992572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.531 NewBaseBdev 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.531 09:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 [ 00:22:02.531 { 00:22:02.531 "name": "NewBaseBdev", 00:22:02.531 "aliases": [ 00:22:02.531 "cd7f3e94-0d71-4063-a3b0-2218ed3b244f" 00:22:02.531 ], 00:22:02.531 "product_name": "Malloc disk", 00:22:02.531 "block_size": 512, 00:22:02.531 "num_blocks": 65536, 00:22:02.531 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:22:02.531 "assigned_rate_limits": { 00:22:02.531 "rw_ios_per_sec": 0, 00:22:02.531 "rw_mbytes_per_sec": 0, 00:22:02.531 "r_mbytes_per_sec": 0, 00:22:02.531 "w_mbytes_per_sec": 0 00:22:02.531 }, 00:22:02.531 "claimed": true, 00:22:02.531 "claim_type": "exclusive_write", 00:22:02.531 "zoned": false, 00:22:02.531 "supported_io_types": { 00:22:02.531 "read": true, 00:22:02.531 "write": true, 00:22:02.531 "unmap": true, 00:22:02.531 "flush": true, 00:22:02.531 "reset": true, 00:22:02.531 "nvme_admin": false, 00:22:02.531 "nvme_io": false, 00:22:02.531 "nvme_io_md": false, 00:22:02.531 "write_zeroes": true, 00:22:02.531 "zcopy": true, 00:22:02.531 "get_zone_info": false, 00:22:02.531 "zone_management": false, 00:22:02.531 "zone_append": false, 00:22:02.531 "compare": false, 00:22:02.531 "compare_and_write": false, 00:22:02.531 "abort": true, 00:22:02.531 "seek_hole": false, 00:22:02.531 "seek_data": false, 00:22:02.531 "copy": true, 00:22:02.531 "nvme_iov_md": false 00:22:02.531 }, 00:22:02.531 "memory_domains": [ 00:22:02.531 { 00:22:02.531 "dma_device_id": "system", 00:22:02.531 "dma_device_type": 1 00:22:02.531 }, 00:22:02.531 { 00:22:02.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.531 "dma_device_type": 2 00:22:02.531 } 
00:22:02.531 ], 00:22:02.531 "driver_specific": {} 00:22:02.531 } 00:22:02.531 ] 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.531 09:14:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:02.789 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:02.789 "name": "Existed_Raid", 00:22:02.789 "uuid": "d4e767c3-0e7d-4e28-8d57-0aad6c43cf77", 00:22:02.789 "strip_size_kb": 64, 00:22:02.789 "state": "online", 00:22:02.789 "raid_level": "raid5f", 00:22:02.789 "superblock": false, 00:22:02.789 "num_base_bdevs": 4, 00:22:02.789 "num_base_bdevs_discovered": 4, 00:22:02.789 "num_base_bdevs_operational": 4, 00:22:02.789 "base_bdevs_list": [ 00:22:02.789 { 00:22:02.789 "name": "NewBaseBdev", 00:22:02.789 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:22:02.789 "is_configured": true, 00:22:02.789 "data_offset": 0, 00:22:02.789 "data_size": 65536 00:22:02.789 }, 00:22:02.789 { 00:22:02.789 "name": "BaseBdev2", 00:22:02.789 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:22:02.789 "is_configured": true, 00:22:02.789 "data_offset": 0, 00:22:02.789 "data_size": 65536 00:22:02.789 }, 00:22:02.789 { 00:22:02.789 "name": "BaseBdev3", 00:22:02.789 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:22:02.789 "is_configured": true, 00:22:02.789 "data_offset": 0, 00:22:02.789 "data_size": 65536 00:22:02.789 }, 00:22:02.789 { 00:22:02.789 "name": "BaseBdev4", 00:22:02.789 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:22:02.789 "is_configured": true, 00:22:02.789 "data_offset": 0, 00:22:02.790 "data_size": 65536 00:22:02.790 } 00:22:02.790 ] 00:22:02.790 }' 00:22:02.790 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:02.790 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.048 [2024-11-06 09:14:39.540165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:03.048 "name": "Existed_Raid", 00:22:03.048 "aliases": [ 00:22:03.048 "d4e767c3-0e7d-4e28-8d57-0aad6c43cf77" 00:22:03.048 ], 00:22:03.048 "product_name": "Raid Volume", 00:22:03.048 "block_size": 512, 00:22:03.048 "num_blocks": 196608, 00:22:03.048 "uuid": "d4e767c3-0e7d-4e28-8d57-0aad6c43cf77", 00:22:03.048 "assigned_rate_limits": { 00:22:03.048 "rw_ios_per_sec": 0, 00:22:03.048 "rw_mbytes_per_sec": 0, 00:22:03.048 "r_mbytes_per_sec": 0, 00:22:03.048 "w_mbytes_per_sec": 0 00:22:03.048 }, 00:22:03.048 "claimed": false, 00:22:03.048 "zoned": false, 00:22:03.048 "supported_io_types": { 00:22:03.048 "read": true, 00:22:03.048 "write": true, 00:22:03.048 "unmap": false, 00:22:03.048 "flush": false, 00:22:03.048 "reset": true, 00:22:03.048 "nvme_admin": false, 00:22:03.048 "nvme_io": false, 00:22:03.048 "nvme_io_md": 
false, 00:22:03.048 "write_zeroes": true, 00:22:03.048 "zcopy": false, 00:22:03.048 "get_zone_info": false, 00:22:03.048 "zone_management": false, 00:22:03.048 "zone_append": false, 00:22:03.048 "compare": false, 00:22:03.048 "compare_and_write": false, 00:22:03.048 "abort": false, 00:22:03.048 "seek_hole": false, 00:22:03.048 "seek_data": false, 00:22:03.048 "copy": false, 00:22:03.048 "nvme_iov_md": false 00:22:03.048 }, 00:22:03.048 "driver_specific": { 00:22:03.048 "raid": { 00:22:03.048 "uuid": "d4e767c3-0e7d-4e28-8d57-0aad6c43cf77", 00:22:03.048 "strip_size_kb": 64, 00:22:03.048 "state": "online", 00:22:03.048 "raid_level": "raid5f", 00:22:03.048 "superblock": false, 00:22:03.048 "num_base_bdevs": 4, 00:22:03.048 "num_base_bdevs_discovered": 4, 00:22:03.048 "num_base_bdevs_operational": 4, 00:22:03.048 "base_bdevs_list": [ 00:22:03.048 { 00:22:03.048 "name": "NewBaseBdev", 00:22:03.048 "uuid": "cd7f3e94-0d71-4063-a3b0-2218ed3b244f", 00:22:03.048 "is_configured": true, 00:22:03.048 "data_offset": 0, 00:22:03.048 "data_size": 65536 00:22:03.048 }, 00:22:03.048 { 00:22:03.048 "name": "BaseBdev2", 00:22:03.048 "uuid": "7dffcf55-a035-482b-9549-c2aaca38446b", 00:22:03.048 "is_configured": true, 00:22:03.048 "data_offset": 0, 00:22:03.048 "data_size": 65536 00:22:03.048 }, 00:22:03.048 { 00:22:03.048 "name": "BaseBdev3", 00:22:03.048 "uuid": "8a44fec2-b32a-442b-873b-55257d3144f7", 00:22:03.048 "is_configured": true, 00:22:03.048 "data_offset": 0, 00:22:03.048 "data_size": 65536 00:22:03.048 }, 00:22:03.048 { 00:22:03.048 "name": "BaseBdev4", 00:22:03.048 "uuid": "ff251f06-3f16-43db-a1f7-3c06f0c78312", 00:22:03.048 "is_configured": true, 00:22:03.048 "data_offset": 0, 00:22:03.048 "data_size": 65536 00:22:03.048 } 00:22:03.048 ] 00:22:03.048 } 00:22:03.048 } 00:22:03.048 }' 00:22:03.048 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:03.321 09:14:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:03.321 BaseBdev2 00:22:03.321 BaseBdev3 00:22:03.321 BaseBdev4' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.321 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.580 [2024-11-06 09:14:39.907950] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:03.580 [2024-11-06 09:14:39.907992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:03.580 [2024-11-06 09:14:39.908086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:03.580 [2024-11-06 09:14:39.908448] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:03.580 [2024-11-06 09:14:39.908475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83065 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83065 ']' 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83065 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:03.580 09:14:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83065 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:03.580 killing process with pid 83065 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83065' 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 83065 00:22:03.580 [2024-11-06 09:14:39.946988] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:03.580 09:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 83065 00:22:03.838 [2024-11-06 09:14:40.304986] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:22:05.214 00:22:05.214 real 0m12.797s 00:22:05.214 user 0m21.369s 00:22:05.214 sys 0m1.732s 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.214 ************************************ 00:22:05.214 END TEST raid5f_state_function_test 00:22:05.214 ************************************ 00:22:05.214 09:14:41 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:22:05.214 09:14:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:05.214 09:14:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:05.214 09:14:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:05.214 ************************************ 00:22:05.214 START TEST 
raid5f_state_function_test_sb 00:22:05.214 ************************************ 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:22:05.214 
09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83748 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83748' 00:22:05.214 Process raid pid: 83748 00:22:05.214 09:14:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83748 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 83748 ']' 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:05.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:05.214 09:14:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.214 [2024-11-06 09:14:41.493733] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:22:05.214 [2024-11-06 09:14:41.493907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.214 [2024-11-06 09:14:41.664555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.472 [2024-11-06 09:14:41.830408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.731 [2024-11-06 09:14:42.037400] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.731 [2024-11-06 09:14:42.037451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.989 [2024-11-06 09:14:42.490551] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:05.989 [2024-11-06 09:14:42.490624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:05.989 [2024-11-06 09:14:42.490642] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:05.989 [2024-11-06 09:14:42.490659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:05.989 [2024-11-06 09:14:42.490669] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:22:05.989 [2024-11-06 09:14:42.490682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:05.989 [2024-11-06 09:14:42.490692] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:05.989 [2024-11-06 09:14:42.490706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:05.989 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.990 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.248 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.248 "name": "Existed_Raid", 00:22:06.248 "uuid": "27719f18-3523-4d3e-b8b4-2e8ed0be21b0", 00:22:06.248 "strip_size_kb": 64, 00:22:06.248 "state": "configuring", 00:22:06.248 "raid_level": "raid5f", 00:22:06.248 "superblock": true, 00:22:06.248 "num_base_bdevs": 4, 00:22:06.248 "num_base_bdevs_discovered": 0, 00:22:06.248 "num_base_bdevs_operational": 4, 00:22:06.248 "base_bdevs_list": [ 00:22:06.248 { 00:22:06.248 "name": "BaseBdev1", 00:22:06.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.248 "is_configured": false, 00:22:06.248 "data_offset": 0, 00:22:06.248 "data_size": 0 00:22:06.248 }, 00:22:06.248 { 00:22:06.248 "name": "BaseBdev2", 00:22:06.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.248 "is_configured": false, 00:22:06.248 "data_offset": 0, 00:22:06.248 "data_size": 0 00:22:06.248 }, 00:22:06.248 { 00:22:06.248 "name": "BaseBdev3", 00:22:06.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.248 "is_configured": false, 00:22:06.248 "data_offset": 0, 00:22:06.248 "data_size": 0 00:22:06.248 }, 00:22:06.248 { 00:22:06.248 "name": "BaseBdev4", 00:22:06.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.248 "is_configured": false, 00:22:06.248 "data_offset": 0, 00:22:06.248 "data_size": 0 00:22:06.248 } 00:22:06.248 ] 00:22:06.248 }' 00:22:06.248 09:14:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.248 09:14:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.507 [2024-11-06 09:14:43.018861] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:06.507 [2024-11-06 09:14:43.018928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.507 [2024-11-06 09:14:43.026687] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:06.507 [2024-11-06 09:14:43.026762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:06.507 [2024-11-06 09:14:43.026793] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:06.507 [2024-11-06 09:14:43.026868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:06.507 [2024-11-06 09:14:43.026896] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:06.507 [2024-11-06 09:14:43.026930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:06.507 [2024-11-06 09:14:43.026952] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:06.507 [2024-11-06 09:14:43.026982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.507 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.765 [2024-11-06 09:14:43.078299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:06.765 BaseBdev1 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.765 [ 00:22:06.765 { 00:22:06.765 "name": "BaseBdev1", 00:22:06.765 "aliases": [ 00:22:06.765 "dfebe39d-da12-4f85-a701-6b67a1c394ca" 00:22:06.765 ], 00:22:06.765 "product_name": "Malloc disk", 00:22:06.765 "block_size": 512, 00:22:06.765 "num_blocks": 65536, 00:22:06.765 "uuid": "dfebe39d-da12-4f85-a701-6b67a1c394ca", 00:22:06.765 "assigned_rate_limits": { 00:22:06.765 "rw_ios_per_sec": 0, 00:22:06.765 "rw_mbytes_per_sec": 0, 00:22:06.765 "r_mbytes_per_sec": 0, 00:22:06.765 "w_mbytes_per_sec": 0 00:22:06.765 }, 00:22:06.765 "claimed": true, 00:22:06.765 "claim_type": "exclusive_write", 00:22:06.765 "zoned": false, 00:22:06.765 "supported_io_types": { 00:22:06.765 "read": true, 00:22:06.765 "write": true, 00:22:06.765 "unmap": true, 00:22:06.765 "flush": true, 00:22:06.765 "reset": true, 00:22:06.765 "nvme_admin": false, 00:22:06.765 "nvme_io": false, 00:22:06.765 "nvme_io_md": false, 00:22:06.765 "write_zeroes": true, 00:22:06.765 "zcopy": true, 00:22:06.765 "get_zone_info": false, 00:22:06.765 "zone_management": false, 00:22:06.765 "zone_append": false, 00:22:06.765 "compare": false, 00:22:06.765 "compare_and_write": false, 00:22:06.765 "abort": true, 00:22:06.765 "seek_hole": false, 00:22:06.765 "seek_data": false, 00:22:06.765 "copy": true, 00:22:06.765 "nvme_iov_md": false 00:22:06.765 }, 00:22:06.765 "memory_domains": [ 00:22:06.765 { 00:22:06.765 "dma_device_id": "system", 00:22:06.765 "dma_device_type": 1 00:22:06.765 }, 00:22:06.765 { 00:22:06.765 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:06.765 "dma_device_type": 2 00:22:06.765 } 00:22:06.765 ], 00:22:06.765 "driver_specific": {} 00:22:06.765 } 00:22:06.765 ] 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:06.765 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:06.766 "name": "Existed_Raid", 00:22:06.766 "uuid": "1d579fb1-e3bc-43c7-8578-f7326af2a9f0", 00:22:06.766 "strip_size_kb": 64, 00:22:06.766 "state": "configuring", 00:22:06.766 "raid_level": "raid5f", 00:22:06.766 "superblock": true, 00:22:06.766 "num_base_bdevs": 4, 00:22:06.766 "num_base_bdevs_discovered": 1, 00:22:06.766 "num_base_bdevs_operational": 4, 00:22:06.766 "base_bdevs_list": [ 00:22:06.766 { 00:22:06.766 "name": "BaseBdev1", 00:22:06.766 "uuid": "dfebe39d-da12-4f85-a701-6b67a1c394ca", 00:22:06.766 "is_configured": true, 00:22:06.766 "data_offset": 2048, 00:22:06.766 "data_size": 63488 00:22:06.766 }, 00:22:06.766 { 00:22:06.766 "name": "BaseBdev2", 00:22:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.766 "is_configured": false, 00:22:06.766 "data_offset": 0, 00:22:06.766 "data_size": 0 00:22:06.766 }, 00:22:06.766 { 00:22:06.766 "name": "BaseBdev3", 00:22:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.766 "is_configured": false, 00:22:06.766 "data_offset": 0, 00:22:06.766 "data_size": 0 00:22:06.766 }, 00:22:06.766 { 00:22:06.766 "name": "BaseBdev4", 00:22:06.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.766 "is_configured": false, 00:22:06.766 "data_offset": 0, 00:22:06.766 "data_size": 0 00:22:06.766 } 00:22:06.766 ] 00:22:06.766 }' 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:06.766 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:07.340 09:14:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.340 [2024-11-06 09:14:43.630521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:07.340 [2024-11-06 09:14:43.630611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.340 [2024-11-06 09:14:43.638586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:07.340 [2024-11-06 09:14:43.641006] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.340 [2024-11-06 09:14:43.641073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.340 [2024-11-06 09:14:43.641100] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:07.340 [2024-11-06 09:14:43.641131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:07.340 [2024-11-06 09:14:43.641151] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:07.340 [2024-11-06 09:14:43.641178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.340 09:14:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.340 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.340 "name": "Existed_Raid", 00:22:07.340 "uuid": "94f0509c-7831-4659-b18a-7e6b1c03415f", 00:22:07.340 "strip_size_kb": 64, 00:22:07.340 "state": "configuring", 00:22:07.340 "raid_level": "raid5f", 00:22:07.340 "superblock": true, 00:22:07.340 "num_base_bdevs": 4, 00:22:07.340 "num_base_bdevs_discovered": 1, 00:22:07.340 "num_base_bdevs_operational": 4, 00:22:07.340 "base_bdevs_list": [ 00:22:07.340 { 00:22:07.340 "name": "BaseBdev1", 00:22:07.340 "uuid": "dfebe39d-da12-4f85-a701-6b67a1c394ca", 00:22:07.340 "is_configured": true, 00:22:07.340 "data_offset": 2048, 00:22:07.340 "data_size": 63488 00:22:07.340 }, 00:22:07.340 { 00:22:07.340 "name": "BaseBdev2", 00:22:07.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.340 "is_configured": false, 00:22:07.340 "data_offset": 0, 00:22:07.340 "data_size": 0 00:22:07.340 }, 00:22:07.340 { 00:22:07.340 "name": "BaseBdev3", 00:22:07.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.340 "is_configured": false, 00:22:07.340 "data_offset": 0, 00:22:07.340 "data_size": 0 00:22:07.341 }, 00:22:07.341 { 00:22:07.341 "name": "BaseBdev4", 00:22:07.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.341 "is_configured": false, 00:22:07.341 "data_offset": 0, 00:22:07.341 "data_size": 0 00:22:07.341 } 00:22:07.341 ] 00:22:07.341 }' 00:22:07.341 09:14:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.341 09:14:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.906 [2024-11-06 09:14:44.192805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:07.906 BaseBdev2 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:07.906 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.907 [ 00:22:07.907 { 00:22:07.907 "name": "BaseBdev2", 00:22:07.907 "aliases": [ 00:22:07.907 
"5da0211d-27dd-43ee-b30a-6e70d820401d" 00:22:07.907 ], 00:22:07.907 "product_name": "Malloc disk", 00:22:07.907 "block_size": 512, 00:22:07.907 "num_blocks": 65536, 00:22:07.907 "uuid": "5da0211d-27dd-43ee-b30a-6e70d820401d", 00:22:07.907 "assigned_rate_limits": { 00:22:07.907 "rw_ios_per_sec": 0, 00:22:07.907 "rw_mbytes_per_sec": 0, 00:22:07.907 "r_mbytes_per_sec": 0, 00:22:07.907 "w_mbytes_per_sec": 0 00:22:07.907 }, 00:22:07.907 "claimed": true, 00:22:07.907 "claim_type": "exclusive_write", 00:22:07.907 "zoned": false, 00:22:07.907 "supported_io_types": { 00:22:07.907 "read": true, 00:22:07.907 "write": true, 00:22:07.907 "unmap": true, 00:22:07.907 "flush": true, 00:22:07.907 "reset": true, 00:22:07.907 "nvme_admin": false, 00:22:07.907 "nvme_io": false, 00:22:07.907 "nvme_io_md": false, 00:22:07.907 "write_zeroes": true, 00:22:07.907 "zcopy": true, 00:22:07.907 "get_zone_info": false, 00:22:07.907 "zone_management": false, 00:22:07.907 "zone_append": false, 00:22:07.907 "compare": false, 00:22:07.907 "compare_and_write": false, 00:22:07.907 "abort": true, 00:22:07.907 "seek_hole": false, 00:22:07.907 "seek_data": false, 00:22:07.907 "copy": true, 00:22:07.907 "nvme_iov_md": false 00:22:07.907 }, 00:22:07.907 "memory_domains": [ 00:22:07.907 { 00:22:07.907 "dma_device_id": "system", 00:22:07.907 "dma_device_type": 1 00:22:07.907 }, 00:22:07.907 { 00:22:07.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.907 "dma_device_type": 2 00:22:07.907 } 00:22:07.907 ], 00:22:07.907 "driver_specific": {} 00:22:07.907 } 00:22:07.907 ] 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.907 "name": "Existed_Raid", 00:22:07.907 "uuid": 
"94f0509c-7831-4659-b18a-7e6b1c03415f", 00:22:07.907 "strip_size_kb": 64, 00:22:07.907 "state": "configuring", 00:22:07.907 "raid_level": "raid5f", 00:22:07.907 "superblock": true, 00:22:07.907 "num_base_bdevs": 4, 00:22:07.907 "num_base_bdevs_discovered": 2, 00:22:07.907 "num_base_bdevs_operational": 4, 00:22:07.907 "base_bdevs_list": [ 00:22:07.907 { 00:22:07.907 "name": "BaseBdev1", 00:22:07.907 "uuid": "dfebe39d-da12-4f85-a701-6b67a1c394ca", 00:22:07.907 "is_configured": true, 00:22:07.907 "data_offset": 2048, 00:22:07.907 "data_size": 63488 00:22:07.907 }, 00:22:07.907 { 00:22:07.907 "name": "BaseBdev2", 00:22:07.907 "uuid": "5da0211d-27dd-43ee-b30a-6e70d820401d", 00:22:07.907 "is_configured": true, 00:22:07.907 "data_offset": 2048, 00:22:07.907 "data_size": 63488 00:22:07.907 }, 00:22:07.907 { 00:22:07.907 "name": "BaseBdev3", 00:22:07.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.907 "is_configured": false, 00:22:07.907 "data_offset": 0, 00:22:07.907 "data_size": 0 00:22:07.907 }, 00:22:07.907 { 00:22:07.907 "name": "BaseBdev4", 00:22:07.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.907 "is_configured": false, 00:22:07.907 "data_offset": 0, 00:22:07.907 "data_size": 0 00:22:07.907 } 00:22:07.907 ] 00:22:07.907 }' 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.907 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.475 [2024-11-06 09:14:44.770747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:08.475 BaseBdev3 
00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.475 [ 00:22:08.475 { 00:22:08.475 "name": "BaseBdev3", 00:22:08.475 "aliases": [ 00:22:08.475 "09d50379-dd70-4e82-9999-2e29c0a9c26a" 00:22:08.475 ], 00:22:08.475 "product_name": "Malloc disk", 00:22:08.475 "block_size": 512, 00:22:08.475 "num_blocks": 65536, 00:22:08.475 "uuid": "09d50379-dd70-4e82-9999-2e29c0a9c26a", 00:22:08.475 
"assigned_rate_limits": { 00:22:08.475 "rw_ios_per_sec": 0, 00:22:08.475 "rw_mbytes_per_sec": 0, 00:22:08.475 "r_mbytes_per_sec": 0, 00:22:08.475 "w_mbytes_per_sec": 0 00:22:08.475 }, 00:22:08.475 "claimed": true, 00:22:08.475 "claim_type": "exclusive_write", 00:22:08.475 "zoned": false, 00:22:08.475 "supported_io_types": { 00:22:08.475 "read": true, 00:22:08.475 "write": true, 00:22:08.475 "unmap": true, 00:22:08.475 "flush": true, 00:22:08.475 "reset": true, 00:22:08.475 "nvme_admin": false, 00:22:08.475 "nvme_io": false, 00:22:08.475 "nvme_io_md": false, 00:22:08.475 "write_zeroes": true, 00:22:08.475 "zcopy": true, 00:22:08.475 "get_zone_info": false, 00:22:08.475 "zone_management": false, 00:22:08.475 "zone_append": false, 00:22:08.475 "compare": false, 00:22:08.475 "compare_and_write": false, 00:22:08.475 "abort": true, 00:22:08.475 "seek_hole": false, 00:22:08.475 "seek_data": false, 00:22:08.475 "copy": true, 00:22:08.475 "nvme_iov_md": false 00:22:08.475 }, 00:22:08.475 "memory_domains": [ 00:22:08.475 { 00:22:08.475 "dma_device_id": "system", 00:22:08.475 "dma_device_type": 1 00:22:08.475 }, 00:22:08.475 { 00:22:08.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.475 "dma_device_type": 2 00:22:08.475 } 00:22:08.475 ], 00:22:08.475 "driver_specific": {} 00:22:08.475 } 00:22:08.475 ] 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:08.475 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.476 "name": "Existed_Raid", 00:22:08.476 "uuid": "94f0509c-7831-4659-b18a-7e6b1c03415f", 00:22:08.476 "strip_size_kb": 64, 00:22:08.476 "state": "configuring", 00:22:08.476 "raid_level": "raid5f", 00:22:08.476 "superblock": true, 00:22:08.476 "num_base_bdevs": 4, 00:22:08.476 "num_base_bdevs_discovered": 3, 
00:22:08.476 "num_base_bdevs_operational": 4, 00:22:08.476 "base_bdevs_list": [ 00:22:08.476 { 00:22:08.476 "name": "BaseBdev1", 00:22:08.476 "uuid": "dfebe39d-da12-4f85-a701-6b67a1c394ca", 00:22:08.476 "is_configured": true, 00:22:08.476 "data_offset": 2048, 00:22:08.476 "data_size": 63488 00:22:08.476 }, 00:22:08.476 { 00:22:08.476 "name": "BaseBdev2", 00:22:08.476 "uuid": "5da0211d-27dd-43ee-b30a-6e70d820401d", 00:22:08.476 "is_configured": true, 00:22:08.476 "data_offset": 2048, 00:22:08.476 "data_size": 63488 00:22:08.476 }, 00:22:08.476 { 00:22:08.476 "name": "BaseBdev3", 00:22:08.476 "uuid": "09d50379-dd70-4e82-9999-2e29c0a9c26a", 00:22:08.476 "is_configured": true, 00:22:08.476 "data_offset": 2048, 00:22:08.476 "data_size": 63488 00:22:08.476 }, 00:22:08.476 { 00:22:08.476 "name": "BaseBdev4", 00:22:08.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.476 "is_configured": false, 00:22:08.476 "data_offset": 0, 00:22:08.476 "data_size": 0 00:22:08.476 } 00:22:08.476 ] 00:22:08.476 }' 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.476 09:14:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.042 [2024-11-06 09:14:45.308761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.042 BaseBdev4 00:22:09.042 [2024-11-06 09:14:45.309275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:09.042 [2024-11-06 09:14:45.309300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 
00:22:09.042 [2024-11-06 09:14:45.309622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.042 [2024-11-06 09:14:45.316584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:09.042 [2024-11-06 09:14:45.316729] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:09.042 [2024-11-06 09:14:45.317192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:09.042 09:14:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.042 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.042 [ 00:22:09.042 { 00:22:09.042 "name": "BaseBdev4", 00:22:09.042 "aliases": [ 00:22:09.042 "74155b36-b320-45cc-82b7-4f462b52b2ab" 00:22:09.042 ], 00:22:09.042 "product_name": "Malloc disk", 00:22:09.042 "block_size": 512, 00:22:09.042 "num_blocks": 65536, 00:22:09.042 "uuid": "74155b36-b320-45cc-82b7-4f462b52b2ab", 00:22:09.042 "assigned_rate_limits": { 00:22:09.042 "rw_ios_per_sec": 0, 00:22:09.042 "rw_mbytes_per_sec": 0, 00:22:09.042 "r_mbytes_per_sec": 0, 00:22:09.042 "w_mbytes_per_sec": 0 00:22:09.042 }, 00:22:09.043 "claimed": true, 00:22:09.043 "claim_type": "exclusive_write", 00:22:09.043 "zoned": false, 00:22:09.043 "supported_io_types": { 00:22:09.043 "read": true, 00:22:09.043 "write": true, 00:22:09.043 "unmap": true, 00:22:09.043 "flush": true, 00:22:09.043 "reset": true, 00:22:09.043 "nvme_admin": false, 00:22:09.043 "nvme_io": false, 00:22:09.043 "nvme_io_md": false, 00:22:09.043 "write_zeroes": true, 00:22:09.043 "zcopy": true, 00:22:09.043 "get_zone_info": false, 00:22:09.043 "zone_management": false, 00:22:09.043 "zone_append": false, 00:22:09.043 "compare": false, 00:22:09.043 "compare_and_write": false, 00:22:09.043 "abort": true, 00:22:09.043 "seek_hole": false, 00:22:09.043 "seek_data": false, 00:22:09.043 "copy": true, 00:22:09.043 "nvme_iov_md": false 00:22:09.043 }, 00:22:09.043 "memory_domains": [ 00:22:09.043 { 00:22:09.043 "dma_device_id": "system", 00:22:09.043 "dma_device_type": 1 00:22:09.043 }, 00:22:09.043 { 00:22:09.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.043 "dma_device_type": 2 00:22:09.043 } 00:22:09.043 ], 00:22:09.043 "driver_specific": {} 00:22:09.043 } 00:22:09.043 ] 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.043 09:14:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.043 "name": "Existed_Raid", 00:22:09.043 "uuid": "94f0509c-7831-4659-b18a-7e6b1c03415f", 00:22:09.043 "strip_size_kb": 64, 00:22:09.043 "state": "online", 00:22:09.043 "raid_level": "raid5f", 00:22:09.043 "superblock": true, 00:22:09.043 "num_base_bdevs": 4, 00:22:09.043 "num_base_bdevs_discovered": 4, 00:22:09.043 "num_base_bdevs_operational": 4, 00:22:09.043 "base_bdevs_list": [ 00:22:09.043 { 00:22:09.043 "name": "BaseBdev1", 00:22:09.043 "uuid": "dfebe39d-da12-4f85-a701-6b67a1c394ca", 00:22:09.043 "is_configured": true, 00:22:09.043 "data_offset": 2048, 00:22:09.043 "data_size": 63488 00:22:09.043 }, 00:22:09.043 { 00:22:09.043 "name": "BaseBdev2", 00:22:09.043 "uuid": "5da0211d-27dd-43ee-b30a-6e70d820401d", 00:22:09.043 "is_configured": true, 00:22:09.043 "data_offset": 2048, 00:22:09.043 "data_size": 63488 00:22:09.043 }, 00:22:09.043 { 00:22:09.043 "name": "BaseBdev3", 00:22:09.043 "uuid": "09d50379-dd70-4e82-9999-2e29c0a9c26a", 00:22:09.043 "is_configured": true, 00:22:09.043 "data_offset": 2048, 00:22:09.043 "data_size": 63488 00:22:09.043 }, 00:22:09.043 { 00:22:09.043 "name": "BaseBdev4", 00:22:09.043 "uuid": "74155b36-b320-45cc-82b7-4f462b52b2ab", 00:22:09.043 "is_configured": true, 00:22:09.043 "data_offset": 2048, 00:22:09.043 "data_size": 63488 00:22:09.043 } 00:22:09.043 ] 00:22:09.043 }' 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.043 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.301 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:09.301 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:22:09.301 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:09.301 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:09.301 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:09.301 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.560 [2024-11-06 09:14:45.840907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:09.560 "name": "Existed_Raid", 00:22:09.560 "aliases": [ 00:22:09.560 "94f0509c-7831-4659-b18a-7e6b1c03415f" 00:22:09.560 ], 00:22:09.560 "product_name": "Raid Volume", 00:22:09.560 "block_size": 512, 00:22:09.560 "num_blocks": 190464, 00:22:09.560 "uuid": "94f0509c-7831-4659-b18a-7e6b1c03415f", 00:22:09.560 "assigned_rate_limits": { 00:22:09.560 "rw_ios_per_sec": 0, 00:22:09.560 "rw_mbytes_per_sec": 0, 00:22:09.560 "r_mbytes_per_sec": 0, 00:22:09.560 "w_mbytes_per_sec": 0 00:22:09.560 }, 00:22:09.560 "claimed": false, 00:22:09.560 "zoned": false, 00:22:09.560 "supported_io_types": { 00:22:09.560 "read": true, 00:22:09.560 "write": true, 00:22:09.560 "unmap": false, 00:22:09.560 "flush": false, 
00:22:09.560 "reset": true, 00:22:09.560 "nvme_admin": false, 00:22:09.560 "nvme_io": false, 00:22:09.560 "nvme_io_md": false, 00:22:09.560 "write_zeroes": true, 00:22:09.560 "zcopy": false, 00:22:09.560 "get_zone_info": false, 00:22:09.560 "zone_management": false, 00:22:09.560 "zone_append": false, 00:22:09.560 "compare": false, 00:22:09.560 "compare_and_write": false, 00:22:09.560 "abort": false, 00:22:09.560 "seek_hole": false, 00:22:09.560 "seek_data": false, 00:22:09.560 "copy": false, 00:22:09.560 "nvme_iov_md": false 00:22:09.560 }, 00:22:09.560 "driver_specific": { 00:22:09.560 "raid": { 00:22:09.560 "uuid": "94f0509c-7831-4659-b18a-7e6b1c03415f", 00:22:09.560 "strip_size_kb": 64, 00:22:09.560 "state": "online", 00:22:09.560 "raid_level": "raid5f", 00:22:09.560 "superblock": true, 00:22:09.560 "num_base_bdevs": 4, 00:22:09.560 "num_base_bdevs_discovered": 4, 00:22:09.560 "num_base_bdevs_operational": 4, 00:22:09.560 "base_bdevs_list": [ 00:22:09.560 { 00:22:09.560 "name": "BaseBdev1", 00:22:09.560 "uuid": "dfebe39d-da12-4f85-a701-6b67a1c394ca", 00:22:09.560 "is_configured": true, 00:22:09.560 "data_offset": 2048, 00:22:09.560 "data_size": 63488 00:22:09.560 }, 00:22:09.560 { 00:22:09.560 "name": "BaseBdev2", 00:22:09.560 "uuid": "5da0211d-27dd-43ee-b30a-6e70d820401d", 00:22:09.560 "is_configured": true, 00:22:09.560 "data_offset": 2048, 00:22:09.560 "data_size": 63488 00:22:09.560 }, 00:22:09.560 { 00:22:09.560 "name": "BaseBdev3", 00:22:09.560 "uuid": "09d50379-dd70-4e82-9999-2e29c0a9c26a", 00:22:09.560 "is_configured": true, 00:22:09.560 "data_offset": 2048, 00:22:09.560 "data_size": 63488 00:22:09.560 }, 00:22:09.560 { 00:22:09.560 "name": "BaseBdev4", 00:22:09.560 "uuid": "74155b36-b320-45cc-82b7-4f462b52b2ab", 00:22:09.560 "is_configured": true, 00:22:09.560 "data_offset": 2048, 00:22:09.560 "data_size": 63488 00:22:09.560 } 00:22:09.560 ] 00:22:09.560 } 00:22:09.560 } 00:22:09.560 }' 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:09.560 BaseBdev2 00:22:09.560 BaseBdev3 00:22:09.560 BaseBdev4' 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.560 09:14:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:09.560 09:14:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.560 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:09.819 09:14:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.819 [2024-11-06 09:14:46.208808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:09.819 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.123 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.123 "name": "Existed_Raid", 00:22:10.123 "uuid": "94f0509c-7831-4659-b18a-7e6b1c03415f", 00:22:10.123 "strip_size_kb": 64, 00:22:10.123 "state": "online", 00:22:10.123 "raid_level": "raid5f", 00:22:10.123 "superblock": true, 00:22:10.123 "num_base_bdevs": 4, 00:22:10.123 "num_base_bdevs_discovered": 3, 00:22:10.123 "num_base_bdevs_operational": 3, 00:22:10.123 "base_bdevs_list": [ 00:22:10.123 { 00:22:10.123 "name": 
null, 00:22:10.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.123 "is_configured": false, 00:22:10.123 "data_offset": 0, 00:22:10.123 "data_size": 63488 00:22:10.123 }, 00:22:10.123 { 00:22:10.123 "name": "BaseBdev2", 00:22:10.123 "uuid": "5da0211d-27dd-43ee-b30a-6e70d820401d", 00:22:10.123 "is_configured": true, 00:22:10.123 "data_offset": 2048, 00:22:10.123 "data_size": 63488 00:22:10.123 }, 00:22:10.123 { 00:22:10.123 "name": "BaseBdev3", 00:22:10.123 "uuid": "09d50379-dd70-4e82-9999-2e29c0a9c26a", 00:22:10.123 "is_configured": true, 00:22:10.123 "data_offset": 2048, 00:22:10.123 "data_size": 63488 00:22:10.123 }, 00:22:10.123 { 00:22:10.123 "name": "BaseBdev4", 00:22:10.123 "uuid": "74155b36-b320-45cc-82b7-4f462b52b2ab", 00:22:10.123 "is_configured": true, 00:22:10.123 "data_offset": 2048, 00:22:10.123 "data_size": 63488 00:22:10.123 } 00:22:10.123 ] 00:22:10.123 }' 00:22:10.123 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.123 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.383 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.383 [2024-11-06 09:14:46.842974] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:10.383 [2024-11-06 09:14:46.843311] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.641 [2024-11-06 09:14:46.927916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.641 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:10.642 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:22:10.642 09:14:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:22:10.642 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.642 09:14:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.642 [2024-11-06 09:14:46.991955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.642 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.642 [2024-11-06 
09:14:47.133864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:10.642 [2024-11-06 09:14:47.134037] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.901 09:14:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.901 BaseBdev2 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.901 [ 00:22:10.901 { 00:22:10.901 "name": "BaseBdev2", 00:22:10.901 "aliases": [ 00:22:10.901 "027d4101-811c-4220-ad79-8ebb367ca04c" 00:22:10.901 ], 00:22:10.901 "product_name": "Malloc disk", 00:22:10.901 "block_size": 512, 00:22:10.901 
"num_blocks": 65536, 00:22:10.901 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:10.901 "assigned_rate_limits": { 00:22:10.901 "rw_ios_per_sec": 0, 00:22:10.901 "rw_mbytes_per_sec": 0, 00:22:10.901 "r_mbytes_per_sec": 0, 00:22:10.901 "w_mbytes_per_sec": 0 00:22:10.901 }, 00:22:10.901 "claimed": false, 00:22:10.901 "zoned": false, 00:22:10.901 "supported_io_types": { 00:22:10.901 "read": true, 00:22:10.901 "write": true, 00:22:10.901 "unmap": true, 00:22:10.901 "flush": true, 00:22:10.901 "reset": true, 00:22:10.901 "nvme_admin": false, 00:22:10.901 "nvme_io": false, 00:22:10.901 "nvme_io_md": false, 00:22:10.901 "write_zeroes": true, 00:22:10.901 "zcopy": true, 00:22:10.901 "get_zone_info": false, 00:22:10.901 "zone_management": false, 00:22:10.901 "zone_append": false, 00:22:10.901 "compare": false, 00:22:10.901 "compare_and_write": false, 00:22:10.901 "abort": true, 00:22:10.901 "seek_hole": false, 00:22:10.901 "seek_data": false, 00:22:10.901 "copy": true, 00:22:10.901 "nvme_iov_md": false 00:22:10.901 }, 00:22:10.901 "memory_domains": [ 00:22:10.901 { 00:22:10.901 "dma_device_id": "system", 00:22:10.901 "dma_device_type": 1 00:22:10.901 }, 00:22:10.901 { 00:22:10.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.901 "dma_device_type": 2 00:22:10.901 } 00:22:10.901 ], 00:22:10.901 "driver_specific": {} 00:22:10.901 } 00:22:10.901 ] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:22:10.901 09:14:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.901 BaseBdev3 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.901 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.901 [ 00:22:10.901 { 00:22:10.901 "name": "BaseBdev3", 00:22:10.901 "aliases": [ 00:22:10.901 
"d60f0fd5-3171-4353-9202-b26c39376f0a" 00:22:10.901 ], 00:22:10.901 "product_name": "Malloc disk", 00:22:10.901 "block_size": 512, 00:22:10.901 "num_blocks": 65536, 00:22:10.901 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:10.901 "assigned_rate_limits": { 00:22:10.901 "rw_ios_per_sec": 0, 00:22:10.901 "rw_mbytes_per_sec": 0, 00:22:10.901 "r_mbytes_per_sec": 0, 00:22:10.901 "w_mbytes_per_sec": 0 00:22:10.901 }, 00:22:10.901 "claimed": false, 00:22:10.901 "zoned": false, 00:22:10.901 "supported_io_types": { 00:22:10.901 "read": true, 00:22:10.901 "write": true, 00:22:10.901 "unmap": true, 00:22:10.901 "flush": true, 00:22:10.901 "reset": true, 00:22:10.901 "nvme_admin": false, 00:22:10.901 "nvme_io": false, 00:22:10.901 "nvme_io_md": false, 00:22:10.901 "write_zeroes": true, 00:22:10.901 "zcopy": true, 00:22:10.901 "get_zone_info": false, 00:22:10.901 "zone_management": false, 00:22:10.901 "zone_append": false, 00:22:10.901 "compare": false, 00:22:10.901 "compare_and_write": false, 00:22:10.901 "abort": true, 00:22:10.901 "seek_hole": false, 00:22:10.901 "seek_data": false, 00:22:10.901 "copy": true, 00:22:10.901 "nvme_iov_md": false 00:22:10.901 }, 00:22:10.901 "memory_domains": [ 00:22:10.901 { 00:22:10.901 "dma_device_id": "system", 00:22:10.901 "dma_device_type": 1 00:22:10.901 }, 00:22:10.901 { 00:22:10.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.902 "dma_device_type": 2 00:22:10.902 } 00:22:10.902 ], 00:22:10.902 "driver_specific": {} 00:22:10.902 } 00:22:10.902 ] 00:22:10.902 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.902 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:10.902 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:10.902 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:10.902 09:14:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:22:10.902 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.902 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.160 BaseBdev4 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:11.160 [ 00:22:11.160 { 00:22:11.160 "name": "BaseBdev4", 00:22:11.160 "aliases": [ 00:22:11.160 "03b5aee4-fcba-4df5-80d0-83fec9920892" 00:22:11.160 ], 00:22:11.160 "product_name": "Malloc disk", 00:22:11.160 "block_size": 512, 00:22:11.160 "num_blocks": 65536, 00:22:11.160 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:11.160 "assigned_rate_limits": { 00:22:11.160 "rw_ios_per_sec": 0, 00:22:11.160 "rw_mbytes_per_sec": 0, 00:22:11.160 "r_mbytes_per_sec": 0, 00:22:11.160 "w_mbytes_per_sec": 0 00:22:11.160 }, 00:22:11.160 "claimed": false, 00:22:11.160 "zoned": false, 00:22:11.160 "supported_io_types": { 00:22:11.160 "read": true, 00:22:11.160 "write": true, 00:22:11.160 "unmap": true, 00:22:11.160 "flush": true, 00:22:11.160 "reset": true, 00:22:11.160 "nvme_admin": false, 00:22:11.160 "nvme_io": false, 00:22:11.160 "nvme_io_md": false, 00:22:11.160 "write_zeroes": true, 00:22:11.160 "zcopy": true, 00:22:11.160 "get_zone_info": false, 00:22:11.160 "zone_management": false, 00:22:11.160 "zone_append": false, 00:22:11.160 "compare": false, 00:22:11.160 "compare_and_write": false, 00:22:11.160 "abort": true, 00:22:11.160 "seek_hole": false, 00:22:11.160 "seek_data": false, 00:22:11.160 "copy": true, 00:22:11.160 "nvme_iov_md": false 00:22:11.160 }, 00:22:11.160 "memory_domains": [ 00:22:11.160 { 00:22:11.160 "dma_device_id": "system", 00:22:11.160 "dma_device_type": 1 00:22:11.160 }, 00:22:11.160 { 00:22:11.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.160 "dma_device_type": 2 00:22:11.160 } 00:22:11.160 ], 00:22:11.160 "driver_specific": {} 00:22:11.160 } 00:22:11.160 ] 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:22:11.160 09:14:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.160 [2024-11-06 09:14:47.486735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:11.160 [2024-11-06 09:14:47.486927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:11.160 [2024-11-06 09:14:47.487064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:11.160 [2024-11-06 09:14:47.489427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:11.160 [2024-11-06 09:14:47.489614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.160 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.160 "name": "Existed_Raid", 00:22:11.160 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:11.160 "strip_size_kb": 64, 00:22:11.160 "state": "configuring", 00:22:11.160 "raid_level": "raid5f", 00:22:11.160 "superblock": true, 00:22:11.160 "num_base_bdevs": 4, 00:22:11.160 "num_base_bdevs_discovered": 3, 00:22:11.160 "num_base_bdevs_operational": 4, 00:22:11.160 "base_bdevs_list": [ 00:22:11.160 { 00:22:11.160 "name": "BaseBdev1", 00:22:11.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.161 "is_configured": false, 00:22:11.161 "data_offset": 0, 00:22:11.161 "data_size": 0 00:22:11.161 }, 00:22:11.161 { 00:22:11.161 "name": "BaseBdev2", 00:22:11.161 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:11.161 "is_configured": true, 00:22:11.161 "data_offset": 2048, 00:22:11.161 
"data_size": 63488 00:22:11.161 }, 00:22:11.161 { 00:22:11.161 "name": "BaseBdev3", 00:22:11.161 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:11.161 "is_configured": true, 00:22:11.161 "data_offset": 2048, 00:22:11.161 "data_size": 63488 00:22:11.161 }, 00:22:11.161 { 00:22:11.161 "name": "BaseBdev4", 00:22:11.161 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:11.161 "is_configured": true, 00:22:11.161 "data_offset": 2048, 00:22:11.161 "data_size": 63488 00:22:11.161 } 00:22:11.161 ] 00:22:11.161 }' 00:22:11.161 09:14:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.161 09:14:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.726 [2024-11-06 09:14:48.018913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:11.726 09:14:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:11.726 "name": "Existed_Raid", 00:22:11.726 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:11.726 "strip_size_kb": 64, 00:22:11.726 "state": "configuring", 00:22:11.726 "raid_level": "raid5f", 00:22:11.726 "superblock": true, 00:22:11.726 "num_base_bdevs": 4, 00:22:11.726 "num_base_bdevs_discovered": 2, 00:22:11.726 "num_base_bdevs_operational": 4, 00:22:11.726 "base_bdevs_list": [ 00:22:11.726 { 00:22:11.726 "name": "BaseBdev1", 00:22:11.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.726 "is_configured": false, 00:22:11.726 "data_offset": 0, 00:22:11.726 "data_size": 0 00:22:11.726 }, 00:22:11.726 { 00:22:11.726 "name": null, 00:22:11.726 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:11.726 
"is_configured": false, 00:22:11.726 "data_offset": 0, 00:22:11.726 "data_size": 63488 00:22:11.726 }, 00:22:11.726 { 00:22:11.726 "name": "BaseBdev3", 00:22:11.726 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:11.726 "is_configured": true, 00:22:11.726 "data_offset": 2048, 00:22:11.726 "data_size": 63488 00:22:11.726 }, 00:22:11.726 { 00:22:11.726 "name": "BaseBdev4", 00:22:11.726 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:11.726 "is_configured": true, 00:22:11.726 "data_offset": 2048, 00:22:11.726 "data_size": 63488 00:22:11.726 } 00:22:11.726 ] 00:22:11.726 }' 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:11.726 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.984 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:11.984 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:11.984 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:11.984 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:11.984 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 [2024-11-06 09:14:48.556265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:22:12.242 BaseBdev1 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 [ 00:22:12.242 { 00:22:12.242 "name": "BaseBdev1", 00:22:12.242 "aliases": [ 00:22:12.242 "578c6b0c-c102-401b-8dd6-5873e7b21f93" 00:22:12.242 ], 00:22:12.242 "product_name": "Malloc disk", 00:22:12.242 "block_size": 512, 00:22:12.242 "num_blocks": 65536, 00:22:12.242 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 
00:22:12.242 "assigned_rate_limits": { 00:22:12.242 "rw_ios_per_sec": 0, 00:22:12.242 "rw_mbytes_per_sec": 0, 00:22:12.242 "r_mbytes_per_sec": 0, 00:22:12.242 "w_mbytes_per_sec": 0 00:22:12.242 }, 00:22:12.242 "claimed": true, 00:22:12.242 "claim_type": "exclusive_write", 00:22:12.242 "zoned": false, 00:22:12.242 "supported_io_types": { 00:22:12.242 "read": true, 00:22:12.242 "write": true, 00:22:12.242 "unmap": true, 00:22:12.242 "flush": true, 00:22:12.242 "reset": true, 00:22:12.242 "nvme_admin": false, 00:22:12.242 "nvme_io": false, 00:22:12.242 "nvme_io_md": false, 00:22:12.242 "write_zeroes": true, 00:22:12.242 "zcopy": true, 00:22:12.242 "get_zone_info": false, 00:22:12.242 "zone_management": false, 00:22:12.242 "zone_append": false, 00:22:12.242 "compare": false, 00:22:12.242 "compare_and_write": false, 00:22:12.242 "abort": true, 00:22:12.242 "seek_hole": false, 00:22:12.242 "seek_data": false, 00:22:12.242 "copy": true, 00:22:12.242 "nvme_iov_md": false 00:22:12.242 }, 00:22:12.242 "memory_domains": [ 00:22:12.242 { 00:22:12.242 "dma_device_id": "system", 00:22:12.242 "dma_device_type": 1 00:22:12.242 }, 00:22:12.242 { 00:22:12.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.242 "dma_device_type": 2 00:22:12.242 } 00:22:12.242 ], 00:22:12.242 "driver_specific": {} 00:22:12.242 } 00:22:12.242 ] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.242 09:14:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.242 "name": "Existed_Raid", 00:22:12.242 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:12.242 "strip_size_kb": 64, 00:22:12.242 "state": "configuring", 00:22:12.242 "raid_level": "raid5f", 00:22:12.242 "superblock": true, 00:22:12.242 "num_base_bdevs": 4, 00:22:12.242 "num_base_bdevs_discovered": 3, 00:22:12.242 "num_base_bdevs_operational": 4, 00:22:12.242 "base_bdevs_list": [ 00:22:12.242 { 00:22:12.242 "name": "BaseBdev1", 00:22:12.242 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 
00:22:12.242 "is_configured": true, 00:22:12.242 "data_offset": 2048, 00:22:12.242 "data_size": 63488 00:22:12.242 }, 00:22:12.242 { 00:22:12.242 "name": null, 00:22:12.242 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:12.242 "is_configured": false, 00:22:12.242 "data_offset": 0, 00:22:12.242 "data_size": 63488 00:22:12.242 }, 00:22:12.242 { 00:22:12.242 "name": "BaseBdev3", 00:22:12.242 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:12.242 "is_configured": true, 00:22:12.242 "data_offset": 2048, 00:22:12.242 "data_size": 63488 00:22:12.242 }, 00:22:12.242 { 00:22:12.242 "name": "BaseBdev4", 00:22:12.242 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:12.242 "is_configured": true, 00:22:12.242 "data_offset": 2048, 00:22:12.242 "data_size": 63488 00:22:12.242 } 00:22:12.242 ] 00:22:12.242 }' 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.242 09:14:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:12.809 [2024-11-06 09:14:49.120457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:12.809 "name": "Existed_Raid", 00:22:12.809 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:12.809 "strip_size_kb": 64, 00:22:12.809 "state": "configuring", 00:22:12.809 "raid_level": "raid5f", 00:22:12.809 "superblock": true, 00:22:12.809 "num_base_bdevs": 4, 00:22:12.809 "num_base_bdevs_discovered": 2, 00:22:12.809 "num_base_bdevs_operational": 4, 00:22:12.809 "base_bdevs_list": [ 00:22:12.809 { 00:22:12.809 "name": "BaseBdev1", 00:22:12.809 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 00:22:12.809 "is_configured": true, 00:22:12.809 "data_offset": 2048, 00:22:12.809 "data_size": 63488 00:22:12.809 }, 00:22:12.809 { 00:22:12.809 "name": null, 00:22:12.809 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:12.809 "is_configured": false, 00:22:12.809 "data_offset": 0, 00:22:12.809 "data_size": 63488 00:22:12.809 }, 00:22:12.809 { 00:22:12.809 "name": null, 00:22:12.809 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:12.809 "is_configured": false, 00:22:12.809 "data_offset": 0, 00:22:12.809 "data_size": 63488 00:22:12.809 }, 00:22:12.809 { 00:22:12.809 "name": "BaseBdev4", 00:22:12.809 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:12.809 "is_configured": true, 00:22:12.809 "data_offset": 2048, 00:22:12.809 "data_size": 63488 00:22:12.809 } 00:22:12.809 ] 00:22:12.809 }' 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:12.809 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.374 [2024-11-06 09:14:49.688611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.374 "name": "Existed_Raid", 00:22:13.374 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:13.374 "strip_size_kb": 64, 00:22:13.374 "state": "configuring", 00:22:13.374 "raid_level": "raid5f", 00:22:13.374 "superblock": true, 00:22:13.374 "num_base_bdevs": 4, 00:22:13.374 "num_base_bdevs_discovered": 3, 00:22:13.374 "num_base_bdevs_operational": 4, 00:22:13.374 "base_bdevs_list": [ 00:22:13.374 { 00:22:13.374 "name": "BaseBdev1", 00:22:13.374 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 00:22:13.374 "is_configured": true, 00:22:13.374 "data_offset": 2048, 00:22:13.374 "data_size": 63488 00:22:13.374 }, 00:22:13.374 { 00:22:13.374 "name": null, 00:22:13.374 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:13.374 "is_configured": false, 00:22:13.374 "data_offset": 0, 00:22:13.374 "data_size": 63488 00:22:13.374 }, 00:22:13.374 { 00:22:13.374 "name": "BaseBdev3", 00:22:13.374 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 
00:22:13.374 "is_configured": true, 00:22:13.374 "data_offset": 2048, 00:22:13.374 "data_size": 63488 00:22:13.374 }, 00:22:13.374 { 00:22:13.374 "name": "BaseBdev4", 00:22:13.374 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:13.374 "is_configured": true, 00:22:13.374 "data_offset": 2048, 00:22:13.374 "data_size": 63488 00:22:13.374 } 00:22:13.374 ] 00:22:13.374 }' 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.374 09:14:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.939 [2024-11-06 09:14:50.252837] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.939 "name": "Existed_Raid", 00:22:13.939 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:13.939 "strip_size_kb": 64, 00:22:13.939 "state": "configuring", 00:22:13.939 "raid_level": "raid5f", 
00:22:13.939 "superblock": true, 00:22:13.939 "num_base_bdevs": 4, 00:22:13.939 "num_base_bdevs_discovered": 2, 00:22:13.939 "num_base_bdevs_operational": 4, 00:22:13.939 "base_bdevs_list": [ 00:22:13.939 { 00:22:13.939 "name": null, 00:22:13.939 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 00:22:13.939 "is_configured": false, 00:22:13.939 "data_offset": 0, 00:22:13.939 "data_size": 63488 00:22:13.939 }, 00:22:13.939 { 00:22:13.939 "name": null, 00:22:13.939 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:13.939 "is_configured": false, 00:22:13.939 "data_offset": 0, 00:22:13.939 "data_size": 63488 00:22:13.939 }, 00:22:13.939 { 00:22:13.939 "name": "BaseBdev3", 00:22:13.939 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:13.939 "is_configured": true, 00:22:13.939 "data_offset": 2048, 00:22:13.939 "data_size": 63488 00:22:13.939 }, 00:22:13.939 { 00:22:13.939 "name": "BaseBdev4", 00:22:13.939 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:13.939 "is_configured": true, 00:22:13.939 "data_offset": 2048, 00:22:13.939 "data_size": 63488 00:22:13.939 } 00:22:13.939 ] 00:22:13.939 }' 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.939 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.504 [2024-11-06 09:14:50.930030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.504 "name": "Existed_Raid", 00:22:14.504 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:14.504 "strip_size_kb": 64, 00:22:14.504 "state": "configuring", 00:22:14.504 "raid_level": "raid5f", 00:22:14.504 "superblock": true, 00:22:14.504 "num_base_bdevs": 4, 00:22:14.504 "num_base_bdevs_discovered": 3, 00:22:14.504 "num_base_bdevs_operational": 4, 00:22:14.504 "base_bdevs_list": [ 00:22:14.504 { 00:22:14.504 "name": null, 00:22:14.504 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 00:22:14.504 "is_configured": false, 00:22:14.504 "data_offset": 0, 00:22:14.504 "data_size": 63488 00:22:14.504 }, 00:22:14.504 { 00:22:14.504 "name": "BaseBdev2", 00:22:14.504 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:14.504 "is_configured": true, 00:22:14.504 "data_offset": 2048, 00:22:14.504 "data_size": 63488 00:22:14.504 }, 00:22:14.504 { 00:22:14.504 "name": "BaseBdev3", 00:22:14.504 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:14.504 "is_configured": true, 00:22:14.504 "data_offset": 2048, 00:22:14.504 "data_size": 63488 00:22:14.504 }, 00:22:14.504 { 00:22:14.504 "name": "BaseBdev4", 00:22:14.504 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:14.504 "is_configured": true, 00:22:14.504 "data_offset": 2048, 00:22:14.504 "data_size": 63488 00:22:14.504 } 00:22:14.504 ] 00:22:14.504 }' 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:22:14.504 09:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 578c6b0c-c102-401b-8dd6-5873e7b21f93 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.069 [2024-11-06 09:14:51.539629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:15.069 NewBaseBdev 00:22:15.069 [2024-11-06 
09:14:51.540132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:15.069 [2024-11-06 09:14:51.540158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:15.069 [2024-11-06 09:14:51.540480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.069 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.070 [2024-11-06 09:14:51.546951] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:15.070 [2024-11-06 09:14:51.546982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:15.070 [2024-11-06 09:14:51.547266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.070 [ 00:22:15.070 { 00:22:15.070 "name": "NewBaseBdev", 00:22:15.070 "aliases": [ 00:22:15.070 "578c6b0c-c102-401b-8dd6-5873e7b21f93" 00:22:15.070 ], 00:22:15.070 "product_name": "Malloc disk", 00:22:15.070 "block_size": 512, 00:22:15.070 "num_blocks": 65536, 00:22:15.070 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 00:22:15.070 "assigned_rate_limits": { 00:22:15.070 "rw_ios_per_sec": 0, 00:22:15.070 "rw_mbytes_per_sec": 0, 00:22:15.070 "r_mbytes_per_sec": 0, 00:22:15.070 "w_mbytes_per_sec": 0 00:22:15.070 }, 00:22:15.070 "claimed": true, 00:22:15.070 "claim_type": "exclusive_write", 00:22:15.070 "zoned": false, 00:22:15.070 "supported_io_types": { 00:22:15.070 "read": true, 00:22:15.070 "write": true, 00:22:15.070 "unmap": true, 00:22:15.070 "flush": true, 00:22:15.070 "reset": true, 00:22:15.070 "nvme_admin": false, 00:22:15.070 "nvme_io": false, 00:22:15.070 "nvme_io_md": false, 00:22:15.070 "write_zeroes": true, 00:22:15.070 "zcopy": true, 00:22:15.070 "get_zone_info": false, 00:22:15.070 "zone_management": false, 00:22:15.070 "zone_append": false, 00:22:15.070 "compare": false, 00:22:15.070 "compare_and_write": false, 00:22:15.070 "abort": true, 00:22:15.070 "seek_hole": false, 00:22:15.070 "seek_data": false, 00:22:15.070 "copy": true, 00:22:15.070 "nvme_iov_md": false 00:22:15.070 }, 00:22:15.070 "memory_domains": [ 00:22:15.070 { 00:22:15.070 "dma_device_id": "system", 00:22:15.070 "dma_device_type": 1 00:22:15.070 }, 00:22:15.070 { 00:22:15.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.070 "dma_device_type": 2 00:22:15.070 } 
00:22:15.070 ], 00:22:15.070 "driver_specific": {} 00:22:15.070 } 00:22:15.070 ] 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.070 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.070 
09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.327 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.327 "name": "Existed_Raid", 00:22:15.327 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:15.327 "strip_size_kb": 64, 00:22:15.327 "state": "online", 00:22:15.327 "raid_level": "raid5f", 00:22:15.327 "superblock": true, 00:22:15.327 "num_base_bdevs": 4, 00:22:15.327 "num_base_bdevs_discovered": 4, 00:22:15.327 "num_base_bdevs_operational": 4, 00:22:15.327 "base_bdevs_list": [ 00:22:15.327 { 00:22:15.327 "name": "NewBaseBdev", 00:22:15.327 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 00:22:15.327 "is_configured": true, 00:22:15.327 "data_offset": 2048, 00:22:15.327 "data_size": 63488 00:22:15.327 }, 00:22:15.327 { 00:22:15.327 "name": "BaseBdev2", 00:22:15.327 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:15.327 "is_configured": true, 00:22:15.327 "data_offset": 2048, 00:22:15.327 "data_size": 63488 00:22:15.327 }, 00:22:15.327 { 00:22:15.327 "name": "BaseBdev3", 00:22:15.327 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:15.327 "is_configured": true, 00:22:15.327 "data_offset": 2048, 00:22:15.327 "data_size": 63488 00:22:15.327 }, 00:22:15.327 { 00:22:15.328 "name": "BaseBdev4", 00:22:15.328 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:15.328 "is_configured": true, 00:22:15.328 "data_offset": 2048, 00:22:15.328 "data_size": 63488 00:22:15.328 } 00:22:15.328 ] 00:22:15.328 }' 00:22:15.328 09:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.328 09:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:15.595 [2024-11-06 09:14:52.078930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.595 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.867 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:15.867 "name": "Existed_Raid", 00:22:15.867 "aliases": [ 00:22:15.867 "ba042595-2543-4cef-9c13-1a69318de465" 00:22:15.867 ], 00:22:15.867 "product_name": "Raid Volume", 00:22:15.867 "block_size": 512, 00:22:15.867 "num_blocks": 190464, 00:22:15.867 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:15.867 "assigned_rate_limits": { 00:22:15.867 "rw_ios_per_sec": 0, 00:22:15.867 "rw_mbytes_per_sec": 0, 00:22:15.867 "r_mbytes_per_sec": 0, 00:22:15.867 "w_mbytes_per_sec": 0 00:22:15.867 }, 00:22:15.867 "claimed": false, 00:22:15.867 "zoned": false, 00:22:15.867 "supported_io_types": { 00:22:15.867 "read": true, 00:22:15.867 "write": true, 00:22:15.867 "unmap": false, 00:22:15.867 "flush": false, 
00:22:15.867 "reset": true, 00:22:15.867 "nvme_admin": false, 00:22:15.867 "nvme_io": false, 00:22:15.867 "nvme_io_md": false, 00:22:15.867 "write_zeroes": true, 00:22:15.867 "zcopy": false, 00:22:15.867 "get_zone_info": false, 00:22:15.867 "zone_management": false, 00:22:15.867 "zone_append": false, 00:22:15.867 "compare": false, 00:22:15.867 "compare_and_write": false, 00:22:15.867 "abort": false, 00:22:15.867 "seek_hole": false, 00:22:15.867 "seek_data": false, 00:22:15.867 "copy": false, 00:22:15.868 "nvme_iov_md": false 00:22:15.868 }, 00:22:15.868 "driver_specific": { 00:22:15.868 "raid": { 00:22:15.868 "uuid": "ba042595-2543-4cef-9c13-1a69318de465", 00:22:15.868 "strip_size_kb": 64, 00:22:15.868 "state": "online", 00:22:15.868 "raid_level": "raid5f", 00:22:15.868 "superblock": true, 00:22:15.868 "num_base_bdevs": 4, 00:22:15.868 "num_base_bdevs_discovered": 4, 00:22:15.868 "num_base_bdevs_operational": 4, 00:22:15.868 "base_bdevs_list": [ 00:22:15.868 { 00:22:15.868 "name": "NewBaseBdev", 00:22:15.868 "uuid": "578c6b0c-c102-401b-8dd6-5873e7b21f93", 00:22:15.868 "is_configured": true, 00:22:15.868 "data_offset": 2048, 00:22:15.868 "data_size": 63488 00:22:15.868 }, 00:22:15.868 { 00:22:15.868 "name": "BaseBdev2", 00:22:15.868 "uuid": "027d4101-811c-4220-ad79-8ebb367ca04c", 00:22:15.868 "is_configured": true, 00:22:15.868 "data_offset": 2048, 00:22:15.868 "data_size": 63488 00:22:15.868 }, 00:22:15.868 { 00:22:15.868 "name": "BaseBdev3", 00:22:15.868 "uuid": "d60f0fd5-3171-4353-9202-b26c39376f0a", 00:22:15.868 "is_configured": true, 00:22:15.868 "data_offset": 2048, 00:22:15.868 "data_size": 63488 00:22:15.868 }, 00:22:15.868 { 00:22:15.868 "name": "BaseBdev4", 00:22:15.868 "uuid": "03b5aee4-fcba-4df5-80d0-83fec9920892", 00:22:15.868 "is_configured": true, 00:22:15.868 "data_offset": 2048, 00:22:15.868 "data_size": 63488 00:22:15.868 } 00:22:15.868 ] 00:22:15.868 } 00:22:15.868 } 00:22:15.868 }' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:22:15.868 BaseBdev2 00:22:15.868 BaseBdev3 00:22:15.868 BaseBdev4' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.868 
09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:15.868 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:16.126 [2024-11-06 09:14:52.482723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:16.126 [2024-11-06 09:14:52.482914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.126 [2024-11-06 09:14:52.483026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.126 [2024-11-06 09:14:52.483401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.126 [2024-11-06 09:14:52.483419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83748 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 83748 ']' 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 83748 
00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83748 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83748' 00:22:16.126 killing process with pid 83748 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 83748 00:22:16.126 [2024-11-06 09:14:52.524838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:16.126 09:14:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 83748 00:22:16.384 [2024-11-06 09:14:52.875438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:17.758 09:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:22:17.758 ************************************ 00:22:17.758 END TEST raid5f_state_function_test_sb 00:22:17.758 ************************************ 00:22:17.758 00:22:17.758 real 0m12.504s 00:22:17.758 user 0m20.757s 00:22:17.758 sys 0m1.716s 00:22:17.758 09:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:17.758 09:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:17.758 09:14:53 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:22:17.758 09:14:53 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 
']' 00:22:17.758 09:14:53 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:17.758 09:14:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:17.758 ************************************ 00:22:17.758 START TEST raid5f_superblock_test 00:22:17.758 ************************************ 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84424 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84424 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84424 ']' 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.758 09:14:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:17.759 09:14:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.759 [2024-11-06 09:14:54.070238] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:22:17.759 [2024-11-06 09:14:54.070672] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84424 ]
00:22:17.759 [2024-11-06 09:14:54.248198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:18.017 [2024-11-06 09:14:54.371953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:18.275 [2024-11-06 09:14:54.572850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:18.275 [2024-11-06 09:14:54.573078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.534 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.793 malloc1
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.793 [2024-11-06 09:14:55.079402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:18.793 [2024-11-06 09:14:55.079611] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:18.793 [2024-11-06 09:14:55.079658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:22:18.793 [2024-11-06 09:14:55.079686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:18.793 [2024-11-06 09:14:55.082391] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:18.793 [2024-11-06 09:14:55.082435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:18.793 pt1
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.793 malloc2
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.793 [2024-11-06 09:14:55.134840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:18.793 [2024-11-06 09:14:55.135027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:18.793 [2024-11-06 09:14:55.135102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:22:18.793 [2024-11-06 09:14:55.135207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:18.793 [2024-11-06 09:14:55.137972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:18.793 [2024-11-06 09:14:55.138122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:18.793 pt2
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:22:18.793 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.794 malloc3
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.794 [2024-11-06 09:14:55.204117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:22:18.794 [2024-11-06 09:14:55.204305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:18.794 [2024-11-06 09:14:55.204383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:22:18.794 [2024-11-06 09:14:55.204491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:18.794 [2024-11-06 09:14:55.207219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:18.794 [2024-11-06 09:14:55.207264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:22:18.794 pt3
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.794 malloc4
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.794 [2024-11-06 09:14:55.259622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:22:18.794 [2024-11-06 09:14:55.259803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:18.794 [2024-11-06 09:14:55.259890] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:22:18.794 [2024-11-06 09:14:55.260013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:18.794 [2024-11-06 09:14:55.262799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:18.794 [2024-11-06 09:14:55.262855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:22:18.794 pt4
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.794 [2024-11-06 09:14:55.267735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:18.794 [2024-11-06 09:14:55.270189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:18.794 [2024-11-06 09:14:55.270397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:22:18.794 [2024-11-06 09:14:55.270634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:22:18.794 [2024-11-06 09:14:55.271022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:22:18.794 [2024-11-06 09:14:55.271150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:22:18.794 [2024-11-06 09:14:55.271503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:22:18.794 [2024-11-06 09:14:55.278297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:22:18.794 [2024-11-06 09:14:55.278344] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:22:18.794 [2024-11-06 09:14:55.278586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:18.794 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.053 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:19.053 "name": "raid_bdev1",
00:22:19.053 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9",
00:22:19.053 "strip_size_kb": 64,
00:22:19.053 "state": "online",
00:22:19.053 "raid_level": "raid5f",
00:22:19.053 "superblock": true,
00:22:19.053 "num_base_bdevs": 4,
00:22:19.053 "num_base_bdevs_discovered": 4,
00:22:19.053 "num_base_bdevs_operational": 4,
00:22:19.053 "base_bdevs_list": [
00:22:19.053 {
00:22:19.053 "name": "pt1",
00:22:19.053 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:19.053 "is_configured": true,
00:22:19.053 "data_offset": 2048,
00:22:19.053 "data_size": 63488
00:22:19.053 },
00:22:19.053 {
00:22:19.053 "name": "pt2",
00:22:19.053 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:19.053 "is_configured": true,
00:22:19.053 "data_offset": 2048,
00:22:19.053 "data_size": 63488
00:22:19.053 },
00:22:19.053 {
00:22:19.053 "name": "pt3",
00:22:19.053 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:19.053 "is_configured": true,
00:22:19.053 "data_offset": 2048,
00:22:19.053 "data_size": 63488
00:22:19.053 },
00:22:19.053 {
00:22:19.053 "name": "pt4",
00:22:19.053 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:19.053 "is_configured": true,
00:22:19.053 "data_offset": 2048,
00:22:19.053 "data_size": 63488
00:22:19.053 }
00:22:19.053 ]
00:22:19.053 }'
00:22:19.053 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:19.053 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:22:19.311 [2024-11-06 09:14:55.826164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:19.311 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:22:19.570 "name": "raid_bdev1",
00:22:19.570 "aliases": [
00:22:19.570 "89059852-292f-414c-a496-bc9bcb8b3cf9"
00:22:19.570 ],
00:22:19.570 "product_name": "Raid Volume",
00:22:19.570 "block_size": 512,
00:22:19.570 "num_blocks": 190464,
00:22:19.570 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9",
00:22:19.570 "assigned_rate_limits": {
00:22:19.570 "rw_ios_per_sec": 0,
00:22:19.570 "rw_mbytes_per_sec": 0,
00:22:19.570 "r_mbytes_per_sec": 0,
00:22:19.570 "w_mbytes_per_sec": 0
00:22:19.570 },
00:22:19.570 "claimed": false,
00:22:19.570 "zoned": false,
00:22:19.570 "supported_io_types": {
00:22:19.570 "read": true,
00:22:19.570 "write": true,
00:22:19.570 "unmap": false,
00:22:19.570 "flush": false,
00:22:19.570 "reset": true,
00:22:19.570 "nvme_admin": false,
00:22:19.570 "nvme_io": false,
00:22:19.570 "nvme_io_md": false,
00:22:19.570 "write_zeroes": true,
00:22:19.570 "zcopy": false,
00:22:19.570 "get_zone_info": false,
00:22:19.570 "zone_management": false,
00:22:19.570 "zone_append": false,
00:22:19.570 "compare": false,
00:22:19.570 "compare_and_write": false,
00:22:19.570 "abort": false,
00:22:19.570 "seek_hole": false,
00:22:19.570 "seek_data": false,
00:22:19.570 "copy": false,
00:22:19.570 "nvme_iov_md": false
00:22:19.570 },
00:22:19.570 "driver_specific": {
00:22:19.570 "raid": {
00:22:19.570 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9",
00:22:19.570 "strip_size_kb": 64,
00:22:19.570 "state": "online",
00:22:19.570 "raid_level": "raid5f",
00:22:19.570 "superblock": true,
00:22:19.570 "num_base_bdevs": 4,
00:22:19.570 "num_base_bdevs_discovered": 4,
00:22:19.570 "num_base_bdevs_operational": 4,
00:22:19.570 "base_bdevs_list": [
00:22:19.570 {
00:22:19.570 "name": "pt1",
00:22:19.570 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:19.570 "is_configured": true,
00:22:19.570 "data_offset": 2048,
00:22:19.570 "data_size": 63488
00:22:19.570 },
00:22:19.570 {
00:22:19.570 "name": "pt2",
00:22:19.570 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:19.570 "is_configured": true,
00:22:19.570 "data_offset": 2048,
00:22:19.570 "data_size": 63488
00:22:19.570 },
00:22:19.570 {
00:22:19.570 "name": "pt3",
00:22:19.570 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:19.570 "is_configured": true,
00:22:19.570 "data_offset": 2048,
00:22:19.570 "data_size": 63488
00:22:19.570 },
00:22:19.570 {
00:22:19.570 "name": "pt4",
00:22:19.570 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:19.570 "is_configured": true,
00:22:19.570 "data_offset": 2048,
00:22:19.570 "data_size": 63488
00:22:19.570 }
00:22:19.570 ]
00:22:19.570 }
00:22:19.570 }
00:22:19.570 }'
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:22:19.570 pt2
00:22:19.570 pt3
00:22:19.570 pt4'
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.570 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.571 09:14:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:19.571 09:14:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.571 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.829 [2024-11-06 09:14:56.202242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=89059852-292f-414c-a496-bc9bcb8b3cf9
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 89059852-292f-414c-a496-bc9bcb8b3cf9 ']'
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.829 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.830 [2024-11-06 09:14:56.246004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:19.830 [2024-11-06 09:14:56.246154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:22:19.830 [2024-11-06 09:14:56.246352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:22:19.830 [2024-11-06 09:14:56.246472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:22:19.830 [2024-11-06 09:14:56.246497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:22:19.830 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:20.088 [2024-11-06 09:14:56.382060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:22:20.088 [2024-11-06 09:14:56.384559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:22:20.088 [2024-11-06 09:14:56.384626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:22:20.088 [2024-11-06 09:14:56.384679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:22:20.088 [2024-11-06 09:14:56.384746] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:22:20.088 [2024-11-06 09:14:56.384825] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:22:20.088 [2024-11-06 09:14:56.384867] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:22:20.088 [2024-11-06 09:14:56.384900] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:22:20.088 [2024-11-06 09:14:56.384923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:22:20.088 [2024-11-06 09:14:56.384939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:22:20.088 request:
00:22:20.088 {
00:22:20.088 "name": "raid_bdev1",
00:22:20.088 "raid_level": "raid5f",
00:22:20.088 "base_bdevs": [
00:22:20.088 "malloc1",
00:22:20.088 "malloc2",
00:22:20.088 "malloc3",
00:22:20.088 "malloc4"
00:22:20.088 ],
00:22:20.088 "strip_size_kb": 64,
00:22:20.088 "superblock": false,
00:22:20.088 "method": "bdev_raid_create",
00:22:20.088 "req_id": 1
00:22:20.088 }
00:22:20.088 Got JSON-RPC error response
00:22:20.088 response:
00:22:20.088 {
00:22:20.088 "code": -17,
00:22:20.088 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:22:20.088 }
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:22:20.088 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:20.089 [2024-11-06 09:14:56.438053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:22:20.089 [2024-11-06 09:14:56.438227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:20.089 [2024-11-06 09:14:56.438370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:22:20.089 [2024-11-06 09:14:56.438483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:20.089 [2024-11-06 09:14:56.441360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:20.089 [2024-11-06 09:14:56.441518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:22:20.089 [2024-11-06 09:14:56.441705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:22:20.089 [2024-11-06 09:14:56.441908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:22:20.089 pt1
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:22:20.089 "name": "raid_bdev1",
00:22:20.089 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9",
00:22:20.089 "strip_size_kb": 64,
00:22:20.089 "state": "configuring",
00:22:20.089 "raid_level": "raid5f",
00:22:20.089 "superblock": true,
00:22:20.089 "num_base_bdevs": 4,
00:22:20.089 "num_base_bdevs_discovered": 1,
00:22:20.089 "num_base_bdevs_operational": 4,
00:22:20.089 "base_bdevs_list": [
00:22:20.089 {
00:22:20.089 "name": "pt1",
00:22:20.089 "uuid": "00000000-0000-0000-0000-000000000001",
00:22:20.089 "is_configured": true,
00:22:20.089 "data_offset": 2048,
00:22:20.089 "data_size": 63488
00:22:20.089 },
00:22:20.089 {
00:22:20.089 "name": null,
00:22:20.089 "uuid": "00000000-0000-0000-0000-000000000002",
00:22:20.089 "is_configured": false,
00:22:20.089 "data_offset": 2048,
00:22:20.089 "data_size": 63488
00:22:20.089 },
00:22:20.089 {
00:22:20.089 "name": null,
00:22:20.089 "uuid": "00000000-0000-0000-0000-000000000003",
00:22:20.089 "is_configured": false,
00:22:20.089 "data_offset": 2048,
00:22:20.089 "data_size": 63488
00:22:20.089 },
00:22:20.089 {
00:22:20.089 "name": null,
00:22:20.089 "uuid": "00000000-0000-0000-0000-000000000004",
00:22:20.089 "is_configured": false,
00:22:20.089 "data_offset": 2048,
00:22:20.089 "data_size": 63488
00:22:20.089 }
00:22:20.089 ]
00:22:20.089 }'
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:22:20.089 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:22:20.656 [2024-11-06 09:14:56.954437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:22:20.656 [2024-11-06 09:14:56.954666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:22:20.656 [2024-11-06 09:14:56.954707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:22:20.656 [2024-11-06 09:14:56.954727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:22:20.656 [2024-11-06 09:14:56.955287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:22:20.656 [2024-11-06 09:14:56.955328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:22:20.656 [2024-11-06 09:14:56.955429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:22:20.656 [2024-11-06 09:14:56.955465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:22:20.656 pt2
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.656 [2024-11-06 09:14:56.962426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:20.656 09:14:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:20.656 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.656 "name": "raid_bdev1", 00:22:20.656 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:20.656 "strip_size_kb": 64, 00:22:20.656 "state": "configuring", 00:22:20.656 "raid_level": "raid5f", 00:22:20.656 "superblock": true, 00:22:20.656 "num_base_bdevs": 4, 00:22:20.656 "num_base_bdevs_discovered": 1, 00:22:20.656 "num_base_bdevs_operational": 4, 00:22:20.656 "base_bdevs_list": [ 00:22:20.656 { 00:22:20.656 "name": "pt1", 00:22:20.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:20.656 "is_configured": true, 00:22:20.656 "data_offset": 2048, 00:22:20.656 "data_size": 63488 00:22:20.656 }, 00:22:20.656 { 00:22:20.656 "name": null, 00:22:20.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:20.657 "is_configured": false, 00:22:20.657 "data_offset": 0, 00:22:20.657 "data_size": 63488 00:22:20.657 }, 00:22:20.657 { 00:22:20.657 "name": null, 00:22:20.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:20.657 "is_configured": false, 00:22:20.657 "data_offset": 2048, 00:22:20.657 "data_size": 63488 00:22:20.657 }, 00:22:20.657 { 00:22:20.657 "name": null, 00:22:20.657 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:20.657 "is_configured": false, 00:22:20.657 "data_offset": 2048, 00:22:20.657 "data_size": 63488 00:22:20.657 } 00:22:20.657 ] 00:22:20.657 }' 00:22:20.657 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.657 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.234 [2024-11-06 09:14:57.470584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:21.234 [2024-11-06 09:14:57.470799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.234 [2024-11-06 09:14:57.471013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:21.234 [2024-11-06 09:14:57.471040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.234 [2024-11-06 09:14:57.471596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.234 [2024-11-06 09:14:57.471633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:21.234 [2024-11-06 09:14:57.471739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:21.234 [2024-11-06 09:14:57.471779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:21.234 pt2 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.234 [2024-11-06 09:14:57.478541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:22:21.234 [2024-11-06 09:14:57.478757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.234 [2024-11-06 09:14:57.478853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:21.234 [2024-11-06 09:14:57.479022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.234 [2024-11-06 09:14:57.479502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.234 [2024-11-06 09:14:57.479654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:21.234 [2024-11-06 09:14:57.479876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:21.234 [2024-11-06 09:14:57.480021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:21.234 pt3 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.234 [2024-11-06 09:14:57.486517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:21.234 [2024-11-06 09:14:57.486705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.234 [2024-11-06 09:14:57.486744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:21.234 [2024-11-06 09:14:57.486759] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.234 [2024-11-06 09:14:57.487234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.234 [2024-11-06 09:14:57.487269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:21.234 [2024-11-06 09:14:57.487351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:21.234 [2024-11-06 09:14:57.487379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:21.234 [2024-11-06 09:14:57.487548] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:21.234 [2024-11-06 09:14:57.487563] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:21.234 [2024-11-06 09:14:57.487883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:21.234 pt4 00:22:21.234 [2024-11-06 09:14:57.494343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:21.234 [2024-11-06 09:14:57.494376] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:21.234 [2024-11-06 09:14:57.494604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.234 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:21.234 "name": "raid_bdev1", 00:22:21.234 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:21.234 "strip_size_kb": 64, 00:22:21.234 "state": "online", 00:22:21.234 "raid_level": "raid5f", 00:22:21.234 "superblock": true, 00:22:21.234 "num_base_bdevs": 4, 00:22:21.234 "num_base_bdevs_discovered": 4, 00:22:21.234 "num_base_bdevs_operational": 4, 00:22:21.234 "base_bdevs_list": [ 00:22:21.234 { 00:22:21.234 "name": "pt1", 00:22:21.234 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:21.234 "is_configured": true, 00:22:21.234 
"data_offset": 2048, 00:22:21.234 "data_size": 63488 00:22:21.234 }, 00:22:21.234 { 00:22:21.234 "name": "pt2", 00:22:21.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:21.234 "is_configured": true, 00:22:21.234 "data_offset": 2048, 00:22:21.235 "data_size": 63488 00:22:21.235 }, 00:22:21.235 { 00:22:21.235 "name": "pt3", 00:22:21.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:21.235 "is_configured": true, 00:22:21.235 "data_offset": 2048, 00:22:21.235 "data_size": 63488 00:22:21.235 }, 00:22:21.235 { 00:22:21.235 "name": "pt4", 00:22:21.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:21.235 "is_configured": true, 00:22:21.235 "data_offset": 2048, 00:22:21.235 "data_size": 63488 00:22:21.235 } 00:22:21.235 ] 00:22:21.235 }' 00:22:21.235 09:14:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:21.235 09:14:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.493 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.493 09:14:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:21.493 [2024-11-06 09:14:58.014387] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:21.753 "name": "raid_bdev1", 00:22:21.753 "aliases": [ 00:22:21.753 "89059852-292f-414c-a496-bc9bcb8b3cf9" 00:22:21.753 ], 00:22:21.753 "product_name": "Raid Volume", 00:22:21.753 "block_size": 512, 00:22:21.753 "num_blocks": 190464, 00:22:21.753 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:21.753 "assigned_rate_limits": { 00:22:21.753 "rw_ios_per_sec": 0, 00:22:21.753 "rw_mbytes_per_sec": 0, 00:22:21.753 "r_mbytes_per_sec": 0, 00:22:21.753 "w_mbytes_per_sec": 0 00:22:21.753 }, 00:22:21.753 "claimed": false, 00:22:21.753 "zoned": false, 00:22:21.753 "supported_io_types": { 00:22:21.753 "read": true, 00:22:21.753 "write": true, 00:22:21.753 "unmap": false, 00:22:21.753 "flush": false, 00:22:21.753 "reset": true, 00:22:21.753 "nvme_admin": false, 00:22:21.753 "nvme_io": false, 00:22:21.753 "nvme_io_md": false, 00:22:21.753 "write_zeroes": true, 00:22:21.753 "zcopy": false, 00:22:21.753 "get_zone_info": false, 00:22:21.753 "zone_management": false, 00:22:21.753 "zone_append": false, 00:22:21.753 "compare": false, 00:22:21.753 "compare_and_write": false, 00:22:21.753 "abort": false, 00:22:21.753 "seek_hole": false, 00:22:21.753 "seek_data": false, 00:22:21.753 "copy": false, 00:22:21.753 "nvme_iov_md": false 00:22:21.753 }, 00:22:21.753 "driver_specific": { 00:22:21.753 "raid": { 00:22:21.753 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:21.753 "strip_size_kb": 64, 00:22:21.753 "state": "online", 00:22:21.753 "raid_level": "raid5f", 00:22:21.753 "superblock": true, 00:22:21.753 "num_base_bdevs": 4, 00:22:21.753 "num_base_bdevs_discovered": 4, 
00:22:21.753 "num_base_bdevs_operational": 4, 00:22:21.753 "base_bdevs_list": [ 00:22:21.753 { 00:22:21.753 "name": "pt1", 00:22:21.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:21.753 "is_configured": true, 00:22:21.753 "data_offset": 2048, 00:22:21.753 "data_size": 63488 00:22:21.753 }, 00:22:21.753 { 00:22:21.753 "name": "pt2", 00:22:21.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:21.753 "is_configured": true, 00:22:21.753 "data_offset": 2048, 00:22:21.753 "data_size": 63488 00:22:21.753 }, 00:22:21.753 { 00:22:21.753 "name": "pt3", 00:22:21.753 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:21.753 "is_configured": true, 00:22:21.753 "data_offset": 2048, 00:22:21.753 "data_size": 63488 00:22:21.753 }, 00:22:21.753 { 00:22:21.753 "name": "pt4", 00:22:21.753 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:21.753 "is_configured": true, 00:22:21.753 "data_offset": 2048, 00:22:21.753 "data_size": 63488 00:22:21.753 } 00:22:21.753 ] 00:22:21.753 } 00:22:21.753 } 00:22:21.753 }' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:21.753 pt2 00:22:21.753 pt3 00:22:21.753 pt4' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.753 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.754 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.012 [2024-11-06 09:14:58.390421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.012 09:14:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 89059852-292f-414c-a496-bc9bcb8b3cf9 '!=' 89059852-292f-414c-a496-bc9bcb8b3cf9 ']' 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.012 [2024-11-06 09:14:58.458299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.012 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.012 "name": "raid_bdev1", 00:22:22.012 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:22.012 "strip_size_kb": 64, 00:22:22.012 "state": "online", 00:22:22.012 "raid_level": "raid5f", 00:22:22.012 "superblock": true, 00:22:22.012 "num_base_bdevs": 4, 00:22:22.012 "num_base_bdevs_discovered": 3, 00:22:22.012 "num_base_bdevs_operational": 3, 00:22:22.012 "base_bdevs_list": [ 00:22:22.012 { 00:22:22.012 "name": null, 00:22:22.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.012 "is_configured": false, 00:22:22.012 "data_offset": 0, 00:22:22.012 "data_size": 63488 00:22:22.013 }, 00:22:22.013 { 00:22:22.013 "name": "pt2", 00:22:22.013 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:22.013 "is_configured": true, 00:22:22.013 "data_offset": 2048, 00:22:22.013 "data_size": 63488 00:22:22.013 }, 00:22:22.013 { 00:22:22.013 "name": "pt3", 00:22:22.013 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:22.013 "is_configured": true, 00:22:22.013 "data_offset": 2048, 00:22:22.013 "data_size": 63488 00:22:22.013 }, 00:22:22.013 { 00:22:22.013 "name": "pt4", 00:22:22.013 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:22.013 "is_configured": true, 00:22:22.013 
"data_offset": 2048, 00:22:22.013 "data_size": 63488 00:22:22.013 } 00:22:22.013 ] 00:22:22.013 }' 00:22:22.013 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.013 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 [2024-11-06 09:14:58.978386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.577 [2024-11-06 09:14:58.978425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.577 [2024-11-06 09:14:58.978521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.577 [2024-11-06 09:14:58.978652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.577 [2024-11-06 09:14:58.978670] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 09:14:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 [2024-11-06 09:14:59.062403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:22.577 [2024-11-06 09:14:59.062610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.577 [2024-11-06 09:14:59.062652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:22.577 [2024-11-06 09:14:59.062669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.577 [2024-11-06 09:14:59.065491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.577 [2024-11-06 09:14:59.065535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:22.577 [2024-11-06 09:14:59.065640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:22.577 [2024-11-06 09:14:59.065699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:22.577 pt2 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.577 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.835 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.835 "name": "raid_bdev1", 00:22:22.835 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:22.835 "strip_size_kb": 64, 00:22:22.835 "state": "configuring", 00:22:22.835 "raid_level": "raid5f", 00:22:22.835 "superblock": true, 00:22:22.835 
"num_base_bdevs": 4, 00:22:22.835 "num_base_bdevs_discovered": 1, 00:22:22.835 "num_base_bdevs_operational": 3, 00:22:22.835 "base_bdevs_list": [ 00:22:22.835 { 00:22:22.835 "name": null, 00:22:22.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.835 "is_configured": false, 00:22:22.835 "data_offset": 2048, 00:22:22.835 "data_size": 63488 00:22:22.835 }, 00:22:22.835 { 00:22:22.835 "name": "pt2", 00:22:22.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:22.835 "is_configured": true, 00:22:22.835 "data_offset": 2048, 00:22:22.835 "data_size": 63488 00:22:22.835 }, 00:22:22.835 { 00:22:22.835 "name": null, 00:22:22.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:22.835 "is_configured": false, 00:22:22.835 "data_offset": 2048, 00:22:22.835 "data_size": 63488 00:22:22.835 }, 00:22:22.835 { 00:22:22.835 "name": null, 00:22:22.835 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:22.835 "is_configured": false, 00:22:22.835 "data_offset": 2048, 00:22:22.835 "data_size": 63488 00:22:22.835 } 00:22:22.835 ] 00:22:22.835 }' 00:22:22.835 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.835 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.094 [2024-11-06 09:14:59.578581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:23.094 [2024-11-06 
09:14:59.578793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.094 [2024-11-06 09:14:59.578952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:23.094 [2024-11-06 09:14:59.579069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.094 [2024-11-06 09:14:59.579625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.094 [2024-11-06 09:14:59.579650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:23.094 [2024-11-06 09:14:59.579757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:23.094 [2024-11-06 09:14:59.579796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:23.094 pt3 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.094 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.352 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.352 "name": "raid_bdev1", 00:22:23.352 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:23.352 "strip_size_kb": 64, 00:22:23.352 "state": "configuring", 00:22:23.352 "raid_level": "raid5f", 00:22:23.352 "superblock": true, 00:22:23.352 "num_base_bdevs": 4, 00:22:23.352 "num_base_bdevs_discovered": 2, 00:22:23.352 "num_base_bdevs_operational": 3, 00:22:23.352 "base_bdevs_list": [ 00:22:23.352 { 00:22:23.352 "name": null, 00:22:23.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.352 "is_configured": false, 00:22:23.352 "data_offset": 2048, 00:22:23.352 "data_size": 63488 00:22:23.352 }, 00:22:23.352 { 00:22:23.352 "name": "pt2", 00:22:23.352 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.352 "is_configured": true, 00:22:23.352 "data_offset": 2048, 00:22:23.352 "data_size": 63488 00:22:23.352 }, 00:22:23.352 { 00:22:23.352 "name": "pt3", 00:22:23.352 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:23.352 "is_configured": true, 00:22:23.352 "data_offset": 2048, 00:22:23.352 "data_size": 63488 00:22:23.352 }, 00:22:23.352 { 00:22:23.352 "name": null, 00:22:23.352 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:23.352 "is_configured": false, 00:22:23.352 "data_offset": 2048, 
00:22:23.352 "data_size": 63488 00:22:23.352 } 00:22:23.352 ] 00:22:23.352 }' 00:22:23.352 09:14:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.352 09:14:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.624 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:22:23.624 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:23.624 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:22:23.624 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:23.624 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.624 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.624 [2024-11-06 09:15:00.098738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:23.624 [2024-11-06 09:15:00.098955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.624 [2024-11-06 09:15:00.099126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:23.625 [2024-11-06 09:15:00.099272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.625 [2024-11-06 09:15:00.099860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.625 [2024-11-06 09:15:00.099896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:23.625 [2024-11-06 09:15:00.100001] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:23.625 [2024-11-06 09:15:00.100033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:23.625 [2024-11-06 09:15:00.100206] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:23.625 [2024-11-06 09:15:00.100221] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:23.625 [2024-11-06 09:15:00.100542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:23.625 pt4 00:22:23.625 [2024-11-06 09:15:00.107103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:23.625 [2024-11-06 09:15:00.107135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:23.625 [2024-11-06 09:15:00.107473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:23.625 
09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.625 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:23.883 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:23.883 "name": "raid_bdev1", 00:22:23.883 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:23.883 "strip_size_kb": 64, 00:22:23.883 "state": "online", 00:22:23.883 "raid_level": "raid5f", 00:22:23.883 "superblock": true, 00:22:23.883 "num_base_bdevs": 4, 00:22:23.883 "num_base_bdevs_discovered": 3, 00:22:23.883 "num_base_bdevs_operational": 3, 00:22:23.883 "base_bdevs_list": [ 00:22:23.883 { 00:22:23.883 "name": null, 00:22:23.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.883 "is_configured": false, 00:22:23.883 "data_offset": 2048, 00:22:23.883 "data_size": 63488 00:22:23.883 }, 00:22:23.883 { 00:22:23.883 "name": "pt2", 00:22:23.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:23.883 "is_configured": true, 00:22:23.883 "data_offset": 2048, 00:22:23.883 "data_size": 63488 00:22:23.883 }, 00:22:23.883 { 00:22:23.883 "name": "pt3", 00:22:23.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:23.883 "is_configured": true, 00:22:23.883 "data_offset": 2048, 00:22:23.883 "data_size": 63488 00:22:23.883 }, 00:22:23.883 { 00:22:23.883 "name": "pt4", 00:22:23.883 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:23.883 "is_configured": true, 00:22:23.883 "data_offset": 2048, 00:22:23.883 "data_size": 63488 00:22:23.883 } 00:22:23.883 ] 00:22:23.883 }' 00:22:23.883 09:15:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:23.883 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 [2024-11-06 09:15:00.611105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.143 [2024-11-06 09:15:00.611271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:24.143 [2024-11-06 09:15:00.611403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.143 [2024-11-06 09:15:00.611499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.143 [2024-11-06 09:15:00.611521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.143 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.403 [2024-11-06 09:15:00.687106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:24.403 [2024-11-06 09:15:00.687180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.403 [2024-11-06 09:15:00.687216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:24.403 [2024-11-06 09:15:00.687234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.403 [2024-11-06 09:15:00.690160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.403 [2024-11-06 09:15:00.690337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:24.403 [2024-11-06 09:15:00.690455] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:24.403 [2024-11-06 09:15:00.690527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:24.403 
[2024-11-06 09:15:00.690710] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:24.403 [2024-11-06 09:15:00.690735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:24.403 [2024-11-06 09:15:00.690755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:24.403 [2024-11-06 09:15:00.690854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:24.403 [2024-11-06 09:15:00.691004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:24.403 pt1 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.403 "name": "raid_bdev1", 00:22:24.403 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:24.403 "strip_size_kb": 64, 00:22:24.403 "state": "configuring", 00:22:24.403 "raid_level": "raid5f", 00:22:24.403 "superblock": true, 00:22:24.403 "num_base_bdevs": 4, 00:22:24.403 "num_base_bdevs_discovered": 2, 00:22:24.403 "num_base_bdevs_operational": 3, 00:22:24.403 "base_bdevs_list": [ 00:22:24.403 { 00:22:24.403 "name": null, 00:22:24.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.403 "is_configured": false, 00:22:24.403 "data_offset": 2048, 00:22:24.403 "data_size": 63488 00:22:24.403 }, 00:22:24.403 { 00:22:24.403 "name": "pt2", 00:22:24.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.403 "is_configured": true, 00:22:24.403 "data_offset": 2048, 00:22:24.403 "data_size": 63488 00:22:24.403 }, 00:22:24.403 { 00:22:24.403 "name": "pt3", 00:22:24.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:24.403 "is_configured": true, 00:22:24.403 "data_offset": 2048, 00:22:24.403 "data_size": 63488 00:22:24.403 }, 00:22:24.403 { 00:22:24.403 "name": null, 00:22:24.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:24.403 "is_configured": false, 00:22:24.403 "data_offset": 2048, 00:22:24.403 "data_size": 63488 00:22:24.403 } 00:22:24.403 ] 
00:22:24.403 }' 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.403 09:15:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.972 [2024-11-06 09:15:01.275399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:24.972 [2024-11-06 09:15:01.275604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.972 [2024-11-06 09:15:01.275653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:24.972 [2024-11-06 09:15:01.275670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.972 [2024-11-06 09:15:01.276219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.972 [2024-11-06 09:15:01.276251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:22:24.972 [2024-11-06 09:15:01.276360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:24.972 [2024-11-06 09:15:01.276412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:24.972 [2024-11-06 09:15:01.276631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:24.972 [2024-11-06 09:15:01.276658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:24.972 [2024-11-06 09:15:01.277003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:24.972 pt4 00:22:24.972 [2024-11-06 09:15:01.283698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:24.972 [2024-11-06 09:15:01.283738] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:24.972 [2024-11-06 09:15:01.284078] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.972 09:15:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.972 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.972 "name": "raid_bdev1", 00:22:24.972 "uuid": "89059852-292f-414c-a496-bc9bcb8b3cf9", 00:22:24.972 "strip_size_kb": 64, 00:22:24.972 "state": "online", 00:22:24.972 "raid_level": "raid5f", 00:22:24.972 "superblock": true, 00:22:24.972 "num_base_bdevs": 4, 00:22:24.972 "num_base_bdevs_discovered": 3, 00:22:24.972 "num_base_bdevs_operational": 3, 00:22:24.972 "base_bdevs_list": [ 00:22:24.972 { 00:22:24.972 "name": null, 00:22:24.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.972 "is_configured": false, 00:22:24.972 "data_offset": 2048, 00:22:24.972 "data_size": 63488 00:22:24.972 }, 00:22:24.972 { 00:22:24.972 "name": "pt2", 00:22:24.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:24.972 "is_configured": true, 00:22:24.972 "data_offset": 2048, 00:22:24.973 "data_size": 63488 00:22:24.973 }, 00:22:24.973 { 00:22:24.973 "name": "pt3", 00:22:24.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:24.973 "is_configured": true, 00:22:24.973 "data_offset": 2048, 00:22:24.973 "data_size": 63488 
00:22:24.973 }, 00:22:24.973 { 00:22:24.973 "name": "pt4", 00:22:24.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:24.973 "is_configured": true, 00:22:24.973 "data_offset": 2048, 00:22:24.973 "data_size": 63488 00:22:24.973 } 00:22:24.973 ] 00:22:24.973 }' 00:22:24.973 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.973 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.540 [2024-11-06 09:15:01.855946] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 89059852-292f-414c-a496-bc9bcb8b3cf9 '!=' 89059852-292f-414c-a496-bc9bcb8b3cf9 ']' 00:22:25.540 09:15:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84424 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 84424 ']' 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84424 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84424 00:22:25.540 killing process with pid 84424 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84424' 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 84424 00:22:25.540 [2024-11-06 09:15:01.931275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:25.540 09:15:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 84424 00:22:25.540 [2024-11-06 09:15:01.931390] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:25.540 [2024-11-06 09:15:01.931489] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:25.540 [2024-11-06 09:15:01.931510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:25.799 [2024-11-06 09:15:02.287170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:27.209 ************************************ 00:22:27.209 END TEST raid5f_superblock_test 00:22:27.209 
************************************ 00:22:27.209 09:15:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:22:27.209 00:22:27.209 real 0m9.360s 00:22:27.209 user 0m15.420s 00:22:27.209 sys 0m1.264s 00:22:27.209 09:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:27.209 09:15:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.209 09:15:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:22:27.209 09:15:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:22:27.209 09:15:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:27.209 09:15:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:27.209 09:15:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:27.209 ************************************ 00:22:27.209 START TEST raid5f_rebuild_test 00:22:27.209 ************************************ 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:22:27.209 09:15:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84919 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84919 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 84919 ']' 00:22:27.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:27.209 09:15:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.209 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:27.209 Zero copy mechanism will not be used. 00:22:27.209 [2024-11-06 09:15:03.508664] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:22:27.209 [2024-11-06 09:15:03.508933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84919 ] 00:22:27.209 [2024-11-06 09:15:03.701634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.469 [2024-11-06 09:15:03.865024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:27.727 [2024-11-06 09:15:04.113573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:27.727 [2024-11-06 09:15:04.113638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.292 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 BaseBdev1_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 [2024-11-06 09:15:04.580058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:22:28.293 [2024-11-06 09:15:04.580377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.293 [2024-11-06 09:15:04.580436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:28.293 [2024-11-06 09:15:04.580457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.293 [2024-11-06 09:15:04.583317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.293 BaseBdev1 00:22:28.293 [2024-11-06 09:15:04.583488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 BaseBdev2_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 [2024-11-06 09:15:04.632336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:28.293 [2024-11-06 09:15:04.632541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.293 [2024-11-06 09:15:04.632611] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:28.293 [2024-11-06 09:15:04.632786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.293 [2024-11-06 09:15:04.635615] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.293 BaseBdev2 00:22:28.293 [2024-11-06 09:15:04.635787] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 BaseBdev3_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 [2024-11-06 09:15:04.693835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:28.293 [2024-11-06 09:15:04.694023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.293 [2024-11-06 09:15:04.694097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:28.293 [2024-11-06 09:15:04.694218] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.293 
[2024-11-06 09:15:04.696974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.293 [2024-11-06 09:15:04.697134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:28.293 BaseBdev3 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 BaseBdev4_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 [2024-11-06 09:15:04.749896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:28.293 [2024-11-06 09:15:04.750086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.293 [2024-11-06 09:15:04.750158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:28.293 [2024-11-06 09:15:04.750268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.293 [2024-11-06 09:15:04.753072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.293 BaseBdev4 00:22:28.293 [2024-11-06 09:15:04.753243] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 spare_malloc 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.293 spare_delay 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:28.293 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.294 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.294 [2024-11-06 09:15:04.810110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:28.294 [2024-11-06 09:15:04.810302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.294 [2024-11-06 09:15:04.810378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:28.294 [2024-11-06 09:15:04.810411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.294 [2024-11-06 09:15:04.813296] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.294 [2024-11-06 09:15:04.813445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:28.294 spare 00:22:28.294 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.294 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:22:28.294 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.294 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.294 [2024-11-06 09:15:04.822217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.294 [2024-11-06 09:15:04.824707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:28.294 [2024-11-06 09:15:04.824930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.294 [2024-11-06 09:15:04.825125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:28.294 [2024-11-06 09:15:04.825372] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:28.294 [2024-11-06 09:15:04.825490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:22:28.294 [2024-11-06 09:15:04.825874] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:28.555 [2024-11-06 09:15:04.833001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:28.555 [2024-11-06 09:15:04.833138] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:28.555 [2024-11-06 09:15:04.833516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.555 "name": "raid_bdev1", 00:22:28.555 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:28.555 "strip_size_kb": 64, 00:22:28.555 "state": 
"online", 00:22:28.555 "raid_level": "raid5f", 00:22:28.555 "superblock": false, 00:22:28.555 "num_base_bdevs": 4, 00:22:28.555 "num_base_bdevs_discovered": 4, 00:22:28.555 "num_base_bdevs_operational": 4, 00:22:28.555 "base_bdevs_list": [ 00:22:28.555 { 00:22:28.555 "name": "BaseBdev1", 00:22:28.555 "uuid": "da8b644b-a8b0-5bdf-8c72-8e32f7af6c22", 00:22:28.555 "is_configured": true, 00:22:28.555 "data_offset": 0, 00:22:28.555 "data_size": 65536 00:22:28.555 }, 00:22:28.555 { 00:22:28.555 "name": "BaseBdev2", 00:22:28.555 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:28.555 "is_configured": true, 00:22:28.555 "data_offset": 0, 00:22:28.555 "data_size": 65536 00:22:28.555 }, 00:22:28.555 { 00:22:28.555 "name": "BaseBdev3", 00:22:28.555 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:28.555 "is_configured": true, 00:22:28.555 "data_offset": 0, 00:22:28.555 "data_size": 65536 00:22:28.555 }, 00:22:28.555 { 00:22:28.555 "name": "BaseBdev4", 00:22:28.555 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:28.555 "is_configured": true, 00:22:28.555 "data_offset": 0, 00:22:28.555 "data_size": 65536 00:22:28.555 } 00:22:28.555 ] 00:22:28.555 }' 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.555 09:15:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:28.820 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:28.820 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.820 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:29.078 [2024-11-06 09:15:05.361445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:22:29.078 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:29.337 [2024-11-06 09:15:05.745345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:29.337 /dev/nbd0 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.337 1+0 records in 00:22:29.337 1+0 records out 00:22:29.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361499 s, 11.3 MB/s 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:22:29.337 09:15:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:22:30.272 512+0 records in 00:22:30.272 512+0 records out 00:22:30.272 100663296 bytes (101 MB, 96 MiB) copied, 0.619867 s, 162 MB/s 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:30.272 
[2024-11-06 09:15:06.741727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.272 [2024-11-06 09:15:06.757224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:30.272 09:15:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.273 09:15:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.531 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:30.531 "name": "raid_bdev1", 00:22:30.531 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:30.531 "strip_size_kb": 64, 00:22:30.531 "state": "online", 00:22:30.531 "raid_level": "raid5f", 00:22:30.531 "superblock": false, 00:22:30.531 "num_base_bdevs": 4, 00:22:30.531 "num_base_bdevs_discovered": 3, 00:22:30.531 "num_base_bdevs_operational": 3, 00:22:30.531 "base_bdevs_list": [ 00:22:30.531 { 00:22:30.531 "name": null, 00:22:30.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.531 "is_configured": false, 00:22:30.531 "data_offset": 0, 00:22:30.531 "data_size": 65536 00:22:30.531 }, 00:22:30.531 { 00:22:30.531 "name": "BaseBdev2", 00:22:30.531 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:30.531 "is_configured": true, 00:22:30.531 "data_offset": 0, 00:22:30.531 "data_size": 65536 00:22:30.531 }, 00:22:30.531 { 00:22:30.531 "name": "BaseBdev3", 00:22:30.531 "uuid": 
"bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:30.531 "is_configured": true, 00:22:30.531 "data_offset": 0, 00:22:30.531 "data_size": 65536 00:22:30.531 }, 00:22:30.531 { 00:22:30.531 "name": "BaseBdev4", 00:22:30.531 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:30.531 "is_configured": true, 00:22:30.531 "data_offset": 0, 00:22:30.531 "data_size": 65536 00:22:30.531 } 00:22:30.531 ] 00:22:30.531 }' 00:22:30.531 09:15:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:30.531 09:15:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.789 09:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:30.789 09:15:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.789 09:15:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.789 [2024-11-06 09:15:07.265387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:30.789 [2024-11-06 09:15:07.279647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:22:30.789 09:15:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.789 09:15:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:30.789 [2024-11-06 09:15:07.288687] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:32.187 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.187 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:32.188 09:15:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.188 "name": "raid_bdev1", 00:22:32.188 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:32.188 "strip_size_kb": 64, 00:22:32.188 "state": "online", 00:22:32.188 "raid_level": "raid5f", 00:22:32.188 "superblock": false, 00:22:32.188 "num_base_bdevs": 4, 00:22:32.188 "num_base_bdevs_discovered": 4, 00:22:32.188 "num_base_bdevs_operational": 4, 00:22:32.188 "process": { 00:22:32.188 "type": "rebuild", 00:22:32.188 "target": "spare", 00:22:32.188 "progress": { 00:22:32.188 "blocks": 17280, 00:22:32.188 "percent": 8 00:22:32.188 } 00:22:32.188 }, 00:22:32.188 "base_bdevs_list": [ 00:22:32.188 { 00:22:32.188 "name": "spare", 00:22:32.188 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:32.188 "is_configured": true, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 }, 00:22:32.188 { 00:22:32.188 "name": "BaseBdev2", 00:22:32.188 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:32.188 "is_configured": true, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 }, 00:22:32.188 { 00:22:32.188 "name": "BaseBdev3", 00:22:32.188 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:32.188 "is_configured": true, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 }, 
00:22:32.188 { 00:22:32.188 "name": "BaseBdev4", 00:22:32.188 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:32.188 "is_configured": true, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 } 00:22:32.188 ] 00:22:32.188 }' 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.188 [2024-11-06 09:15:08.466725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:32.188 [2024-11-06 09:15:08.500707] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:32.188 [2024-11-06 09:15:08.500836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.188 [2024-11-06 09:15:08.500868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:32.188 [2024-11-06 09:15:08.500895] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:32.188 "name": "raid_bdev1", 00:22:32.188 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:32.188 "strip_size_kb": 64, 00:22:32.188 "state": "online", 00:22:32.188 "raid_level": "raid5f", 00:22:32.188 "superblock": false, 00:22:32.188 "num_base_bdevs": 4, 00:22:32.188 "num_base_bdevs_discovered": 3, 00:22:32.188 "num_base_bdevs_operational": 3, 00:22:32.188 "base_bdevs_list": [ 00:22:32.188 { 00:22:32.188 "name": null, 00:22:32.188 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:32.188 "is_configured": false, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 }, 00:22:32.188 { 00:22:32.188 "name": "BaseBdev2", 00:22:32.188 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:32.188 "is_configured": true, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 }, 00:22:32.188 { 00:22:32.188 "name": "BaseBdev3", 00:22:32.188 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:32.188 "is_configured": true, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 }, 00:22:32.188 { 00:22:32.188 "name": "BaseBdev4", 00:22:32.188 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:32.188 "is_configured": true, 00:22:32.188 "data_offset": 0, 00:22:32.188 "data_size": 65536 00:22:32.188 } 00:22:32.188 ] 00:22:32.188 }' 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:32.188 09:15:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.758 09:15:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.758 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:32.758 "name": "raid_bdev1", 00:22:32.758 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:32.758 "strip_size_kb": 64, 00:22:32.758 "state": "online", 00:22:32.758 "raid_level": "raid5f", 00:22:32.758 "superblock": false, 00:22:32.758 "num_base_bdevs": 4, 00:22:32.758 "num_base_bdevs_discovered": 3, 00:22:32.758 "num_base_bdevs_operational": 3, 00:22:32.758 "base_bdevs_list": [ 00:22:32.758 { 00:22:32.758 "name": null, 00:22:32.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.758 "is_configured": false, 00:22:32.758 "data_offset": 0, 00:22:32.758 "data_size": 65536 00:22:32.758 }, 00:22:32.758 { 00:22:32.758 "name": "BaseBdev2", 00:22:32.758 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:32.758 "is_configured": true, 00:22:32.758 "data_offset": 0, 00:22:32.758 "data_size": 65536 00:22:32.758 }, 00:22:32.758 { 00:22:32.758 "name": "BaseBdev3", 00:22:32.758 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:32.758 "is_configured": true, 00:22:32.758 "data_offset": 0, 00:22:32.758 "data_size": 65536 00:22:32.758 }, 00:22:32.758 { 00:22:32.758 "name": "BaseBdev4", 00:22:32.758 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:32.758 "is_configured": true, 00:22:32.758 "data_offset": 0, 00:22:32.758 "data_size": 65536 00:22:32.758 } 00:22:32.758 ] 00:22:32.758 }' 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.759 [2024-11-06 09:15:09.225033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.759 [2024-11-06 09:15:09.238336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.759 09:15:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:32.759 [2024-11-06 09:15:09.247063] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.133 09:15:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.133 "name": "raid_bdev1", 00:22:34.133 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:34.133 "strip_size_kb": 64, 00:22:34.133 "state": "online", 00:22:34.133 "raid_level": "raid5f", 00:22:34.133 "superblock": false, 00:22:34.133 "num_base_bdevs": 4, 00:22:34.133 "num_base_bdevs_discovered": 4, 00:22:34.133 "num_base_bdevs_operational": 4, 00:22:34.133 "process": { 00:22:34.133 "type": "rebuild", 00:22:34.133 "target": "spare", 00:22:34.133 "progress": { 00:22:34.133 "blocks": 17280, 00:22:34.133 "percent": 8 00:22:34.133 } 00:22:34.133 }, 00:22:34.133 "base_bdevs_list": [ 00:22:34.133 { 00:22:34.133 "name": "spare", 00:22:34.133 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:34.133 "is_configured": true, 00:22:34.133 "data_offset": 0, 00:22:34.133 "data_size": 65536 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "name": "BaseBdev2", 00:22:34.133 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:34.133 "is_configured": true, 00:22:34.133 "data_offset": 0, 00:22:34.133 "data_size": 65536 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "name": "BaseBdev3", 00:22:34.133 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:34.133 "is_configured": true, 00:22:34.133 "data_offset": 0, 00:22:34.133 "data_size": 65536 00:22:34.133 }, 00:22:34.133 { 00:22:34.133 "name": "BaseBdev4", 00:22:34.133 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:34.133 "is_configured": true, 00:22:34.133 "data_offset": 0, 00:22:34.133 "data_size": 65536 00:22:34.133 } 00:22:34.133 ] 00:22:34.133 }' 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.133 09:15:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=670 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:34.134 "name": "raid_bdev1", 00:22:34.134 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 
00:22:34.134 "strip_size_kb": 64, 00:22:34.134 "state": "online", 00:22:34.134 "raid_level": "raid5f", 00:22:34.134 "superblock": false, 00:22:34.134 "num_base_bdevs": 4, 00:22:34.134 "num_base_bdevs_discovered": 4, 00:22:34.134 "num_base_bdevs_operational": 4, 00:22:34.134 "process": { 00:22:34.134 "type": "rebuild", 00:22:34.134 "target": "spare", 00:22:34.134 "progress": { 00:22:34.134 "blocks": 21120, 00:22:34.134 "percent": 10 00:22:34.134 } 00:22:34.134 }, 00:22:34.134 "base_bdevs_list": [ 00:22:34.134 { 00:22:34.134 "name": "spare", 00:22:34.134 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:34.134 "is_configured": true, 00:22:34.134 "data_offset": 0, 00:22:34.134 "data_size": 65536 00:22:34.134 }, 00:22:34.134 { 00:22:34.134 "name": "BaseBdev2", 00:22:34.134 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:34.134 "is_configured": true, 00:22:34.134 "data_offset": 0, 00:22:34.134 "data_size": 65536 00:22:34.134 }, 00:22:34.134 { 00:22:34.134 "name": "BaseBdev3", 00:22:34.134 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:34.134 "is_configured": true, 00:22:34.134 "data_offset": 0, 00:22:34.134 "data_size": 65536 00:22:34.134 }, 00:22:34.134 { 00:22:34.134 "name": "BaseBdev4", 00:22:34.134 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:34.134 "is_configured": true, 00:22:34.134 "data_offset": 0, 00:22:34.134 "data_size": 65536 00:22:34.134 } 00:22:34.134 ] 00:22:34.134 }' 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.134 09:15:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:35.068 09:15:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.068 09:15:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.326 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.326 "name": "raid_bdev1", 00:22:35.326 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:35.326 "strip_size_kb": 64, 00:22:35.326 "state": "online", 00:22:35.326 "raid_level": "raid5f", 00:22:35.326 "superblock": false, 00:22:35.326 "num_base_bdevs": 4, 00:22:35.326 "num_base_bdevs_discovered": 4, 00:22:35.326 "num_base_bdevs_operational": 4, 00:22:35.326 "process": { 00:22:35.326 "type": "rebuild", 00:22:35.326 "target": "spare", 00:22:35.326 "progress": { 00:22:35.326 "blocks": 44160, 00:22:35.326 "percent": 22 00:22:35.326 } 00:22:35.326 }, 00:22:35.326 "base_bdevs_list": [ 00:22:35.326 { 00:22:35.326 "name": "spare", 00:22:35.326 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 
00:22:35.326 "is_configured": true, 00:22:35.326 "data_offset": 0, 00:22:35.326 "data_size": 65536 00:22:35.326 }, 00:22:35.326 { 00:22:35.326 "name": "BaseBdev2", 00:22:35.326 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:35.326 "is_configured": true, 00:22:35.326 "data_offset": 0, 00:22:35.326 "data_size": 65536 00:22:35.326 }, 00:22:35.326 { 00:22:35.326 "name": "BaseBdev3", 00:22:35.326 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:35.326 "is_configured": true, 00:22:35.326 "data_offset": 0, 00:22:35.326 "data_size": 65536 00:22:35.326 }, 00:22:35.326 { 00:22:35.326 "name": "BaseBdev4", 00:22:35.326 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:35.326 "is_configured": true, 00:22:35.326 "data_offset": 0, 00:22:35.326 "data_size": 65536 00:22:35.326 } 00:22:35.326 ] 00:22:35.326 }' 00:22:35.326 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.326 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.326 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.326 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.326 09:15:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.259 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:36.259 "name": "raid_bdev1", 00:22:36.259 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:36.259 "strip_size_kb": 64, 00:22:36.259 "state": "online", 00:22:36.259 "raid_level": "raid5f", 00:22:36.259 "superblock": false, 00:22:36.259 "num_base_bdevs": 4, 00:22:36.259 "num_base_bdevs_discovered": 4, 00:22:36.259 "num_base_bdevs_operational": 4, 00:22:36.259 "process": { 00:22:36.259 "type": "rebuild", 00:22:36.259 "target": "spare", 00:22:36.260 "progress": { 00:22:36.260 "blocks": 65280, 00:22:36.260 "percent": 33 00:22:36.260 } 00:22:36.260 }, 00:22:36.260 "base_bdevs_list": [ 00:22:36.260 { 00:22:36.260 "name": "spare", 00:22:36.260 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:36.260 "is_configured": true, 00:22:36.260 "data_offset": 0, 00:22:36.260 "data_size": 65536 00:22:36.260 }, 00:22:36.260 { 00:22:36.260 "name": "BaseBdev2", 00:22:36.260 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:36.260 "is_configured": true, 00:22:36.260 "data_offset": 0, 00:22:36.260 "data_size": 65536 00:22:36.260 }, 00:22:36.260 { 00:22:36.260 "name": "BaseBdev3", 00:22:36.260 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:36.260 "is_configured": true, 00:22:36.260 "data_offset": 0, 00:22:36.260 "data_size": 65536 00:22:36.260 }, 00:22:36.260 { 00:22:36.260 "name": 
"BaseBdev4", 00:22:36.260 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:36.260 "is_configured": true, 00:22:36.260 "data_offset": 0, 00:22:36.260 "data_size": 65536 00:22:36.260 } 00:22:36.260 ] 00:22:36.260 }' 00:22:36.260 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:36.519 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:36.519 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:36.519 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:36.519 09:15:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.453 09:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.454 09:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:37.454 09:15:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.454 09:15:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.454 "name": "raid_bdev1", 00:22:37.454 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:37.454 "strip_size_kb": 64, 00:22:37.454 "state": "online", 00:22:37.454 "raid_level": "raid5f", 00:22:37.454 "superblock": false, 00:22:37.454 "num_base_bdevs": 4, 00:22:37.454 "num_base_bdevs_discovered": 4, 00:22:37.454 "num_base_bdevs_operational": 4, 00:22:37.454 "process": { 00:22:37.454 "type": "rebuild", 00:22:37.454 "target": "spare", 00:22:37.454 "progress": { 00:22:37.454 "blocks": 88320, 00:22:37.454 "percent": 44 00:22:37.454 } 00:22:37.454 }, 00:22:37.454 "base_bdevs_list": [ 00:22:37.454 { 00:22:37.454 "name": "spare", 00:22:37.454 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:37.454 "is_configured": true, 00:22:37.454 "data_offset": 0, 00:22:37.454 "data_size": 65536 00:22:37.454 }, 00:22:37.454 { 00:22:37.454 "name": "BaseBdev2", 00:22:37.454 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:37.454 "is_configured": true, 00:22:37.454 "data_offset": 0, 00:22:37.454 "data_size": 65536 00:22:37.454 }, 00:22:37.454 { 00:22:37.454 "name": "BaseBdev3", 00:22:37.454 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:37.454 "is_configured": true, 00:22:37.454 "data_offset": 0, 00:22:37.454 "data_size": 65536 00:22:37.454 }, 00:22:37.454 { 00:22:37.454 "name": "BaseBdev4", 00:22:37.454 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:37.454 "is_configured": true, 00:22:37.454 "data_offset": 0, 00:22:37.454 "data_size": 65536 00:22:37.454 } 00:22:37.454 ] 00:22:37.454 }' 00:22:37.454 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.711 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:37.711 09:15:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.711 09:15:14 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:37.711 09:15:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:38.645 "name": "raid_bdev1", 00:22:38.645 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:38.645 "strip_size_kb": 64, 00:22:38.645 "state": "online", 00:22:38.645 "raid_level": "raid5f", 00:22:38.645 "superblock": false, 00:22:38.645 "num_base_bdevs": 4, 00:22:38.645 "num_base_bdevs_discovered": 4, 00:22:38.645 "num_base_bdevs_operational": 4, 00:22:38.645 "process": { 00:22:38.645 "type": "rebuild", 00:22:38.645 "target": "spare", 00:22:38.645 "progress": { 00:22:38.645 "blocks": 109440, 00:22:38.645 "percent": 55 00:22:38.645 } 
00:22:38.645 }, 00:22:38.645 "base_bdevs_list": [ 00:22:38.645 { 00:22:38.645 "name": "spare", 00:22:38.645 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:38.645 "is_configured": true, 00:22:38.645 "data_offset": 0, 00:22:38.645 "data_size": 65536 00:22:38.645 }, 00:22:38.645 { 00:22:38.645 "name": "BaseBdev2", 00:22:38.645 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:38.645 "is_configured": true, 00:22:38.645 "data_offset": 0, 00:22:38.645 "data_size": 65536 00:22:38.645 }, 00:22:38.645 { 00:22:38.645 "name": "BaseBdev3", 00:22:38.645 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:38.645 "is_configured": true, 00:22:38.645 "data_offset": 0, 00:22:38.645 "data_size": 65536 00:22:38.645 }, 00:22:38.645 { 00:22:38.645 "name": "BaseBdev4", 00:22:38.645 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:38.645 "is_configured": true, 00:22:38.645 "data_offset": 0, 00:22:38.645 "data_size": 65536 00:22:38.645 } 00:22:38.645 ] 00:22:38.645 }' 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.645 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:38.904 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.904 09:15:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:39.904 
09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:39.904 "name": "raid_bdev1", 00:22:39.904 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:39.904 "strip_size_kb": 64, 00:22:39.904 "state": "online", 00:22:39.904 "raid_level": "raid5f", 00:22:39.904 "superblock": false, 00:22:39.904 "num_base_bdevs": 4, 00:22:39.904 "num_base_bdevs_discovered": 4, 00:22:39.904 "num_base_bdevs_operational": 4, 00:22:39.904 "process": { 00:22:39.904 "type": "rebuild", 00:22:39.904 "target": "spare", 00:22:39.904 "progress": { 00:22:39.904 "blocks": 132480, 00:22:39.904 "percent": 67 00:22:39.904 } 00:22:39.904 }, 00:22:39.904 "base_bdevs_list": [ 00:22:39.904 { 00:22:39.904 "name": "spare", 00:22:39.904 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:39.904 "is_configured": true, 00:22:39.904 "data_offset": 0, 00:22:39.904 "data_size": 65536 00:22:39.904 }, 00:22:39.904 { 00:22:39.904 "name": "BaseBdev2", 00:22:39.904 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:39.904 "is_configured": true, 00:22:39.904 "data_offset": 0, 00:22:39.904 "data_size": 65536 00:22:39.904 }, 00:22:39.904 { 00:22:39.904 "name": "BaseBdev3", 00:22:39.904 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 
00:22:39.904 "is_configured": true, 00:22:39.904 "data_offset": 0, 00:22:39.904 "data_size": 65536 00:22:39.904 }, 00:22:39.904 { 00:22:39.904 "name": "BaseBdev4", 00:22:39.904 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:39.904 "is_configured": true, 00:22:39.904 "data_offset": 0, 00:22:39.904 "data_size": 65536 00:22:39.904 } 00:22:39.904 ] 00:22:39.904 }' 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:39.904 09:15:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:41.280 "name": "raid_bdev1", 00:22:41.280 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:41.280 "strip_size_kb": 64, 00:22:41.280 "state": "online", 00:22:41.280 "raid_level": "raid5f", 00:22:41.280 "superblock": false, 00:22:41.280 "num_base_bdevs": 4, 00:22:41.280 "num_base_bdevs_discovered": 4, 00:22:41.280 "num_base_bdevs_operational": 4, 00:22:41.280 "process": { 00:22:41.280 "type": "rebuild", 00:22:41.280 "target": "spare", 00:22:41.280 "progress": { 00:22:41.280 "blocks": 153600, 00:22:41.280 "percent": 78 00:22:41.280 } 00:22:41.280 }, 00:22:41.280 "base_bdevs_list": [ 00:22:41.280 { 00:22:41.280 "name": "spare", 00:22:41.280 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:41.280 "is_configured": true, 00:22:41.280 "data_offset": 0, 00:22:41.280 "data_size": 65536 00:22:41.280 }, 00:22:41.280 { 00:22:41.280 "name": "BaseBdev2", 00:22:41.280 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:41.280 "is_configured": true, 00:22:41.280 "data_offset": 0, 00:22:41.280 "data_size": 65536 00:22:41.280 }, 00:22:41.280 { 00:22:41.280 "name": "BaseBdev3", 00:22:41.280 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:41.280 "is_configured": true, 00:22:41.280 "data_offset": 0, 00:22:41.280 "data_size": 65536 00:22:41.280 }, 00:22:41.280 { 00:22:41.280 "name": "BaseBdev4", 00:22:41.280 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:41.280 "is_configured": true, 00:22:41.280 "data_offset": 0, 00:22:41.280 "data_size": 65536 00:22:41.280 } 00:22:41.280 ] 00:22:41.280 }' 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:41.280 09:15:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:41.280 09:15:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:42.223 "name": "raid_bdev1", 00:22:42.223 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:42.223 "strip_size_kb": 64, 00:22:42.223 "state": "online", 00:22:42.223 "raid_level": "raid5f", 00:22:42.223 "superblock": false, 00:22:42.223 "num_base_bdevs": 4, 00:22:42.223 "num_base_bdevs_discovered": 4, 00:22:42.223 "num_base_bdevs_operational": 4, 00:22:42.223 "process": { 00:22:42.223 
"type": "rebuild", 00:22:42.223 "target": "spare", 00:22:42.223 "progress": { 00:22:42.223 "blocks": 176640, 00:22:42.223 "percent": 89 00:22:42.223 } 00:22:42.223 }, 00:22:42.223 "base_bdevs_list": [ 00:22:42.223 { 00:22:42.223 "name": "spare", 00:22:42.223 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:42.223 "is_configured": true, 00:22:42.223 "data_offset": 0, 00:22:42.223 "data_size": 65536 00:22:42.223 }, 00:22:42.223 { 00:22:42.223 "name": "BaseBdev2", 00:22:42.223 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:42.223 "is_configured": true, 00:22:42.223 "data_offset": 0, 00:22:42.223 "data_size": 65536 00:22:42.223 }, 00:22:42.223 { 00:22:42.223 "name": "BaseBdev3", 00:22:42.223 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:42.223 "is_configured": true, 00:22:42.223 "data_offset": 0, 00:22:42.223 "data_size": 65536 00:22:42.223 }, 00:22:42.223 { 00:22:42.223 "name": "BaseBdev4", 00:22:42.223 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:42.223 "is_configured": true, 00:22:42.223 "data_offset": 0, 00:22:42.223 "data_size": 65536 00:22:42.223 } 00:22:42.223 ] 00:22:42.223 }' 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:42.223 09:15:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:43.158 [2024-11-06 09:15:19.648272] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:43.158 [2024-11-06 09:15:19.648391] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:43.158 [2024-11-06 09:15:19.648461] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.416 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:43.416 "name": "raid_bdev1", 00:22:43.416 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:43.416 "strip_size_kb": 64, 00:22:43.416 "state": "online", 00:22:43.416 "raid_level": "raid5f", 00:22:43.416 "superblock": false, 00:22:43.416 "num_base_bdevs": 4, 00:22:43.416 "num_base_bdevs_discovered": 4, 00:22:43.416 "num_base_bdevs_operational": 4, 00:22:43.416 "base_bdevs_list": [ 00:22:43.416 { 00:22:43.416 "name": "spare", 00:22:43.416 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:43.416 "is_configured": true, 00:22:43.416 "data_offset": 0, 00:22:43.416 "data_size": 65536 00:22:43.416 }, 00:22:43.416 { 
00:22:43.416 "name": "BaseBdev2", 00:22:43.416 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:43.416 "is_configured": true, 00:22:43.416 "data_offset": 0, 00:22:43.416 "data_size": 65536 00:22:43.416 }, 00:22:43.416 { 00:22:43.416 "name": "BaseBdev3", 00:22:43.416 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:43.416 "is_configured": true, 00:22:43.417 "data_offset": 0, 00:22:43.417 "data_size": 65536 00:22:43.417 }, 00:22:43.417 { 00:22:43.417 "name": "BaseBdev4", 00:22:43.417 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:43.417 "is_configured": true, 00:22:43.417 "data_offset": 0, 00:22:43.417 "data_size": 65536 00:22:43.417 } 00:22:43.417 ] 00:22:43.417 }' 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:43.417 "name": "raid_bdev1", 00:22:43.417 "uuid": "dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:43.417 "strip_size_kb": 64, 00:22:43.417 "state": "online", 00:22:43.417 "raid_level": "raid5f", 00:22:43.417 "superblock": false, 00:22:43.417 "num_base_bdevs": 4, 00:22:43.417 "num_base_bdevs_discovered": 4, 00:22:43.417 "num_base_bdevs_operational": 4, 00:22:43.417 "base_bdevs_list": [ 00:22:43.417 { 00:22:43.417 "name": "spare", 00:22:43.417 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:43.417 "is_configured": true, 00:22:43.417 "data_offset": 0, 00:22:43.417 "data_size": 65536 00:22:43.417 }, 00:22:43.417 { 00:22:43.417 "name": "BaseBdev2", 00:22:43.417 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:43.417 "is_configured": true, 00:22:43.417 "data_offset": 0, 00:22:43.417 "data_size": 65536 00:22:43.417 }, 00:22:43.417 { 00:22:43.417 "name": "BaseBdev3", 00:22:43.417 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:43.417 "is_configured": true, 00:22:43.417 "data_offset": 0, 00:22:43.417 "data_size": 65536 00:22:43.417 }, 00:22:43.417 { 00:22:43.417 "name": "BaseBdev4", 00:22:43.417 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:43.417 "is_configured": true, 00:22:43.417 "data_offset": 0, 00:22:43.417 "data_size": 65536 00:22:43.417 } 00:22:43.417 ] 00:22:43.417 }' 00:22:43.417 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:43.675 09:15:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:43.675 09:15:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.675 "name": "raid_bdev1", 00:22:43.675 "uuid": 
"dfd3928e-133f-4a30-ad46-71d52699f4e4", 00:22:43.675 "strip_size_kb": 64, 00:22:43.675 "state": "online", 00:22:43.675 "raid_level": "raid5f", 00:22:43.675 "superblock": false, 00:22:43.675 "num_base_bdevs": 4, 00:22:43.675 "num_base_bdevs_discovered": 4, 00:22:43.675 "num_base_bdevs_operational": 4, 00:22:43.675 "base_bdevs_list": [ 00:22:43.675 { 00:22:43.675 "name": "spare", 00:22:43.675 "uuid": "56fc7045-ce8c-5511-82c0-31b8ebfefa1a", 00:22:43.675 "is_configured": true, 00:22:43.675 "data_offset": 0, 00:22:43.675 "data_size": 65536 00:22:43.675 }, 00:22:43.675 { 00:22:43.675 "name": "BaseBdev2", 00:22:43.675 "uuid": "36fc4afb-48a1-594d-849e-fca1070bd292", 00:22:43.675 "is_configured": true, 00:22:43.675 "data_offset": 0, 00:22:43.675 "data_size": 65536 00:22:43.675 }, 00:22:43.675 { 00:22:43.675 "name": "BaseBdev3", 00:22:43.675 "uuid": "bfc056f3-dd81-5998-b15d-0ea6f6cb55b3", 00:22:43.675 "is_configured": true, 00:22:43.675 "data_offset": 0, 00:22:43.675 "data_size": 65536 00:22:43.675 }, 00:22:43.675 { 00:22:43.675 "name": "BaseBdev4", 00:22:43.675 "uuid": "5039d77e-404f-5a9f-8611-6df5cd78b792", 00:22:43.675 "is_configured": true, 00:22:43.675 "data_offset": 0, 00:22:43.675 "data_size": 65536 00:22:43.675 } 00:22:43.675 ] 00:22:43.675 }' 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.675 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.242 [2024-11-06 09:15:20.535688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.242 [2024-11-06 09:15:20.535755] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:22:44.242 [2024-11-06 09:15:20.535875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:44.242 [2024-11-06 09:15:20.536000] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:44.242 [2024-11-06 09:15:20.536017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:44.242 09:15:20 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:44.242 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:44.500 /dev/nbd0 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:44.500 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:44.501 1+0 records in 00:22:44.501 1+0 records out 00:22:44.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319137 s, 12.8 MB/s 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:44.501 09:15:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:44.760 /dev/nbd1 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:44.760 1+0 records in 00:22:44.760 1+0 records out 00:22:44.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450526 s, 9.1 MB/s 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:44.760 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:45.018 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:45.018 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:45.018 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:45.018 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:45.018 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:22:45.018 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:45.018 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:45.276 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:45.534 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:45.534 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:22:45.535 09:15:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84919 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 84919 ']' 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 84919 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84919 00:22:45.535 killing process with pid 84919 00:22:45.535 Received shutdown signal, test time was about 60.000000 seconds 00:22:45.535 00:22:45.535 Latency(us) 00:22:45.535 [2024-11-06T09:15:22.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.535 [2024-11-06T09:15:22.070Z] =================================================================================================================== 00:22:45.535 [2024-11-06T09:15:22.070Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84919' 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 84919 00:22:45.535 [2024-11-06 09:15:21.994200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.535 09:15:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 84919 00:22:46.101 [2024-11-06 09:15:22.423467] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:22:47.036 00:22:47.036 real 0m20.049s 00:22:47.036 user 0m25.012s 00:22:47.036 sys 0m2.256s 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:22:47.036 ************************************ 00:22:47.036 END TEST raid5f_rebuild_test 00:22:47.036 ************************************ 00:22:47.036 09:15:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:22:47.036 09:15:23 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:22:47.036 09:15:23 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:47.036 09:15:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:47.036 ************************************ 00:22:47.036 START TEST raid5f_rebuild_test_sb 00:22:47.036 ************************************ 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:47.036 09:15:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85423 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85423 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 85423 ']' 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:47.036 09:15:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.294 [2024-11-06 09:15:23.597613] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:22:47.294 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:22:47.294 Zero copy mechanism will not be used. 00:22:47.294 [2024-11-06 09:15:23.597783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85423 ] 00:22:47.294 [2024-11-06 09:15:23.782144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.562 [2024-11-06 09:15:23.909405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.830 [2024-11-06 09:15:24.112517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.830 [2024-11-06 09:15:24.112582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.088 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:48.088 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:22:48.088 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:48.088 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:48.088 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.088 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 BaseBdev1_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 [2024-11-06 
09:15:24.647141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:48.347 [2024-11-06 09:15:24.647218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.347 [2024-11-06 09:15:24.647252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:48.347 [2024-11-06 09:15:24.647271] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.347 [2024-11-06 09:15:24.650031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.347 [2024-11-06 09:15:24.650082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:48.347 BaseBdev1 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 BaseBdev2_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 [2024-11-06 09:15:24.699293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:48.347 [2024-11-06 09:15:24.699385] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.347 [2024-11-06 09:15:24.699415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:48.347 [2024-11-06 09:15:24.699436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.347 [2024-11-06 09:15:24.702231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.347 [2024-11-06 09:15:24.702293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:48.347 BaseBdev2 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 BaseBdev3_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 [2024-11-06 09:15:24.763495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:48.347 [2024-11-06 09:15:24.763563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.347 [2024-11-06 09:15:24.763595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:22:48.347 [2024-11-06 09:15:24.763615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.347 [2024-11-06 09:15:24.766280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.347 [2024-11-06 09:15:24.766330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:48.347 BaseBdev3 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 BaseBdev4_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 [2024-11-06 09:15:24.815929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:48.347 [2024-11-06 09:15:24.815995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.347 [2024-11-06 09:15:24.816023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:22:48.347 [2024-11-06 09:15:24.816041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.347 [2024-11-06 09:15:24.818713] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.347 [2024-11-06 09:15:24.818765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:48.347 BaseBdev4 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 spare_malloc 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 spare_delay 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.347 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 [2024-11-06 09:15:24.879529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:48.347 [2024-11-06 09:15:24.879605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.347 [2024-11-06 09:15:24.879637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:22:48.347 [2024-11-06 09:15:24.879655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.605 [2024-11-06 09:15:24.882463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.605 [2024-11-06 09:15:24.882515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:48.605 spare 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.605 [2024-11-06 09:15:24.887594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.605 [2024-11-06 09:15:24.889981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:48.605 [2024-11-06 09:15:24.890071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:48.605 [2024-11-06 09:15:24.890151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:48.605 [2024-11-06 09:15:24.890394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:48.605 [2024-11-06 09:15:24.890429] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:22:48.605 [2024-11-06 09:15:24.890751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:48.605 [2024-11-06 09:15:24.897385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:48.605 [2024-11-06 09:15:24.897415] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:48.605 [2024-11-06 09:15:24.897656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.605 "name": "raid_bdev1", 00:22:48.605 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:48.605 "strip_size_kb": 64, 00:22:48.605 "state": "online", 00:22:48.605 "raid_level": "raid5f", 00:22:48.605 "superblock": true, 00:22:48.605 "num_base_bdevs": 4, 00:22:48.605 "num_base_bdevs_discovered": 4, 00:22:48.605 "num_base_bdevs_operational": 4, 00:22:48.605 "base_bdevs_list": [ 00:22:48.605 { 00:22:48.605 "name": "BaseBdev1", 00:22:48.605 "uuid": "7751dd14-cc0a-55c8-825c-fed279b35330", 00:22:48.605 "is_configured": true, 00:22:48.605 "data_offset": 2048, 00:22:48.605 "data_size": 63488 00:22:48.605 }, 00:22:48.605 { 00:22:48.605 "name": "BaseBdev2", 00:22:48.605 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:48.605 "is_configured": true, 00:22:48.605 "data_offset": 2048, 00:22:48.605 "data_size": 63488 00:22:48.605 }, 00:22:48.605 { 00:22:48.605 "name": "BaseBdev3", 00:22:48.605 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:48.605 "is_configured": true, 00:22:48.605 "data_offset": 2048, 00:22:48.605 "data_size": 63488 00:22:48.605 }, 00:22:48.605 { 00:22:48.605 "name": "BaseBdev4", 00:22:48.605 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:48.605 "is_configured": true, 00:22:48.605 "data_offset": 2048, 00:22:48.605 "data_size": 63488 00:22:48.605 } 00:22:48.605 ] 00:22:48.605 }' 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.605 09:15:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.170 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:49.170 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.171 [2024-11-06 09:15:25.405286] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:49.171 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:49.429 [2024-11-06 09:15:25.789196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:22:49.429 /dev/nbd0 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:22:49.429 1+0 records in 00:22:49.429 1+0 records out 00:22:49.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230309 s, 17.8 MB/s 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:22:49.429 09:15:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:22:49.994 496+0 records in 00:22:49.994 496+0 records out 00:22:49.994 97517568 bytes (98 MB, 93 MiB) copied, 0.593741 s, 164 MB/s 00:22:49.994 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:49.994 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:49.994 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:49.994 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 
-- # local nbd_list 00:22:49.994 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:22:49.994 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:49.994 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:50.252 [2024-11-06 09:15:26.743462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.252 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.252 [2024-11-06 09:15:26.783045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.509 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.509 "name": "raid_bdev1", 00:22:50.509 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:50.509 "strip_size_kb": 64, 00:22:50.509 "state": "online", 00:22:50.509 "raid_level": "raid5f", 00:22:50.509 "superblock": true, 00:22:50.509 "num_base_bdevs": 4, 00:22:50.509 "num_base_bdevs_discovered": 3, 00:22:50.509 
"num_base_bdevs_operational": 3, 00:22:50.509 "base_bdevs_list": [ 00:22:50.509 { 00:22:50.509 "name": null, 00:22:50.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.509 "is_configured": false, 00:22:50.509 "data_offset": 0, 00:22:50.509 "data_size": 63488 00:22:50.509 }, 00:22:50.509 { 00:22:50.509 "name": "BaseBdev2", 00:22:50.509 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:50.509 "is_configured": true, 00:22:50.509 "data_offset": 2048, 00:22:50.509 "data_size": 63488 00:22:50.509 }, 00:22:50.509 { 00:22:50.509 "name": "BaseBdev3", 00:22:50.509 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:50.509 "is_configured": true, 00:22:50.509 "data_offset": 2048, 00:22:50.509 "data_size": 63488 00:22:50.509 }, 00:22:50.510 { 00:22:50.510 "name": "BaseBdev4", 00:22:50.510 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:50.510 "is_configured": true, 00:22:50.510 "data_offset": 2048, 00:22:50.510 "data_size": 63488 00:22:50.510 } 00:22:50.510 ] 00:22:50.510 }' 00:22:50.510 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.510 09:15:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.825 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:50.825 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.825 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.825 [2024-11-06 09:15:27.271202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.825 [2024-11-06 09:15:27.285192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:22:50.825 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.825 09:15:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:50.825 
[2024-11-06 09:15:27.294385] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.774 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.032 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.032 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.032 "name": "raid_bdev1", 00:22:52.032 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:52.032 "strip_size_kb": 64, 00:22:52.032 "state": "online", 00:22:52.032 "raid_level": "raid5f", 00:22:52.032 "superblock": true, 00:22:52.032 "num_base_bdevs": 4, 00:22:52.032 "num_base_bdevs_discovered": 4, 00:22:52.032 "num_base_bdevs_operational": 4, 00:22:52.032 "process": { 00:22:52.032 "type": "rebuild", 00:22:52.032 "target": "spare", 00:22:52.032 "progress": { 00:22:52.032 "blocks": 17280, 00:22:52.032 "percent": 9 00:22:52.032 } 00:22:52.032 }, 00:22:52.032 "base_bdevs_list": [ 00:22:52.032 { 00:22:52.032 "name": 
"spare", 00:22:52.032 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:52.032 "is_configured": true, 00:22:52.032 "data_offset": 2048, 00:22:52.032 "data_size": 63488 00:22:52.032 }, 00:22:52.032 { 00:22:52.032 "name": "BaseBdev2", 00:22:52.032 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:52.032 "is_configured": true, 00:22:52.032 "data_offset": 2048, 00:22:52.032 "data_size": 63488 00:22:52.032 }, 00:22:52.032 { 00:22:52.032 "name": "BaseBdev3", 00:22:52.032 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:52.033 "is_configured": true, 00:22:52.033 "data_offset": 2048, 00:22:52.033 "data_size": 63488 00:22:52.033 }, 00:22:52.033 { 00:22:52.033 "name": "BaseBdev4", 00:22:52.033 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:52.033 "is_configured": true, 00:22:52.033 "data_offset": 2048, 00:22:52.033 "data_size": 63488 00:22:52.033 } 00:22:52.033 ] 00:22:52.033 }' 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.033 [2024-11-06 09:15:28.460199] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.033 [2024-11-06 09:15:28.507064] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:52.033 [2024-11-06 
09:15:28.507161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.033 [2024-11-06 09:15:28.507188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.033 [2024-11-06 09:15:28.507202] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:52.033 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.291 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:52.291 "name": "raid_bdev1", 00:22:52.291 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:52.291 "strip_size_kb": 64, 00:22:52.291 "state": "online", 00:22:52.291 "raid_level": "raid5f", 00:22:52.291 "superblock": true, 00:22:52.291 "num_base_bdevs": 4, 00:22:52.291 "num_base_bdevs_discovered": 3, 00:22:52.291 "num_base_bdevs_operational": 3, 00:22:52.291 "base_bdevs_list": [ 00:22:52.291 { 00:22:52.291 "name": null, 00:22:52.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.291 "is_configured": false, 00:22:52.291 "data_offset": 0, 00:22:52.291 "data_size": 63488 00:22:52.291 }, 00:22:52.291 { 00:22:52.291 "name": "BaseBdev2", 00:22:52.291 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:52.291 "is_configured": true, 00:22:52.291 "data_offset": 2048, 00:22:52.291 "data_size": 63488 00:22:52.291 }, 00:22:52.291 { 00:22:52.291 "name": "BaseBdev3", 00:22:52.291 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:52.291 "is_configured": true, 00:22:52.291 "data_offset": 2048, 00:22:52.291 "data_size": 63488 00:22:52.291 }, 00:22:52.291 { 00:22:52.291 "name": "BaseBdev4", 00:22:52.291 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:52.291 "is_configured": true, 00:22:52.291 "data_offset": 2048, 00:22:52.291 "data_size": 63488 00:22:52.291 } 00:22:52.291 ] 00:22:52.291 }' 00:22:52.291 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:52.291 09:15:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.549 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:52.807 "name": "raid_bdev1", 00:22:52.807 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:52.807 "strip_size_kb": 64, 00:22:52.807 "state": "online", 00:22:52.807 "raid_level": "raid5f", 00:22:52.807 "superblock": true, 00:22:52.807 "num_base_bdevs": 4, 00:22:52.807 "num_base_bdevs_discovered": 3, 00:22:52.807 "num_base_bdevs_operational": 3, 00:22:52.807 "base_bdevs_list": [ 00:22:52.807 { 00:22:52.807 "name": null, 00:22:52.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.807 "is_configured": false, 00:22:52.807 "data_offset": 0, 00:22:52.807 "data_size": 63488 00:22:52.807 }, 00:22:52.807 { 00:22:52.807 "name": "BaseBdev2", 00:22:52.807 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:52.807 "is_configured": true, 00:22:52.807 "data_offset": 2048, 00:22:52.807 "data_size": 63488 00:22:52.807 }, 00:22:52.807 { 00:22:52.807 "name": "BaseBdev3", 00:22:52.807 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:52.807 "is_configured": true, 
00:22:52.807 "data_offset": 2048, 00:22:52.807 "data_size": 63488 00:22:52.807 }, 00:22:52.807 { 00:22:52.807 "name": "BaseBdev4", 00:22:52.807 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:52.807 "is_configured": true, 00:22:52.807 "data_offset": 2048, 00:22:52.807 "data_size": 63488 00:22:52.807 } 00:22:52.807 ] 00:22:52.807 }' 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.807 [2024-11-06 09:15:29.210069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:52.807 [2024-11-06 09:15:29.223258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.807 09:15:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:52.807 [2024-11-06 09:15:29.231968] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.741 09:15:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.741 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.998 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.998 "name": "raid_bdev1", 00:22:53.998 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:53.998 "strip_size_kb": 64, 00:22:53.998 "state": "online", 00:22:53.998 "raid_level": "raid5f", 00:22:53.999 "superblock": true, 00:22:53.999 "num_base_bdevs": 4, 00:22:53.999 "num_base_bdevs_discovered": 4, 00:22:53.999 "num_base_bdevs_operational": 4, 00:22:53.999 "process": { 00:22:53.999 "type": "rebuild", 00:22:53.999 "target": "spare", 00:22:53.999 "progress": { 00:22:53.999 "blocks": 17280, 00:22:53.999 "percent": 9 00:22:53.999 } 00:22:53.999 }, 00:22:53.999 "base_bdevs_list": [ 00:22:53.999 { 00:22:53.999 "name": "spare", 00:22:53.999 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 00:22:53.999 }, 00:22:53.999 { 00:22:53.999 "name": "BaseBdev2", 00:22:53.999 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 
00:22:53.999 }, 00:22:53.999 { 00:22:53.999 "name": "BaseBdev3", 00:22:53.999 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 00:22:53.999 }, 00:22:53.999 { 00:22:53.999 "name": "BaseBdev4", 00:22:53.999 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 00:22:53.999 } 00:22:53.999 ] 00:22:53.999 }' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:53.999 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=690 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:53.999 09:15:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:53.999 "name": "raid_bdev1", 00:22:53.999 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:53.999 "strip_size_kb": 64, 00:22:53.999 "state": "online", 00:22:53.999 "raid_level": "raid5f", 00:22:53.999 "superblock": true, 00:22:53.999 "num_base_bdevs": 4, 00:22:53.999 "num_base_bdevs_discovered": 4, 00:22:53.999 "num_base_bdevs_operational": 4, 00:22:53.999 "process": { 00:22:53.999 "type": "rebuild", 00:22:53.999 "target": "spare", 00:22:53.999 "progress": { 00:22:53.999 "blocks": 21120, 00:22:53.999 "percent": 11 00:22:53.999 } 00:22:53.999 }, 00:22:53.999 "base_bdevs_list": [ 00:22:53.999 { 00:22:53.999 "name": "spare", 00:22:53.999 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 00:22:53.999 }, 00:22:53.999 { 00:22:53.999 "name": "BaseBdev2", 00:22:53.999 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 
00:22:53.999 }, 00:22:53.999 { 00:22:53.999 "name": "BaseBdev3", 00:22:53.999 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 00:22:53.999 }, 00:22:53.999 { 00:22:53.999 "name": "BaseBdev4", 00:22:53.999 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:53.999 "is_configured": true, 00:22:53.999 "data_offset": 2048, 00:22:53.999 "data_size": 63488 00:22:53.999 } 00:22:53.999 ] 00:22:53.999 }' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:53.999 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:54.257 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.257 09:15:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.190 09:15:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:55.190 "name": "raid_bdev1", 00:22:55.190 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:55.190 "strip_size_kb": 64, 00:22:55.190 "state": "online", 00:22:55.190 "raid_level": "raid5f", 00:22:55.190 "superblock": true, 00:22:55.190 "num_base_bdevs": 4, 00:22:55.190 "num_base_bdevs_discovered": 4, 00:22:55.190 "num_base_bdevs_operational": 4, 00:22:55.190 "process": { 00:22:55.190 "type": "rebuild", 00:22:55.190 "target": "spare", 00:22:55.190 "progress": { 00:22:55.190 "blocks": 42240, 00:22:55.190 "percent": 22 00:22:55.190 } 00:22:55.190 }, 00:22:55.190 "base_bdevs_list": [ 00:22:55.190 { 00:22:55.190 "name": "spare", 00:22:55.190 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:55.190 "is_configured": true, 00:22:55.190 "data_offset": 2048, 00:22:55.190 "data_size": 63488 00:22:55.190 }, 00:22:55.190 { 00:22:55.190 "name": "BaseBdev2", 00:22:55.190 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:55.190 "is_configured": true, 00:22:55.190 "data_offset": 2048, 00:22:55.190 "data_size": 63488 00:22:55.190 }, 00:22:55.190 { 00:22:55.190 "name": "BaseBdev3", 00:22:55.190 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:55.190 "is_configured": true, 00:22:55.190 "data_offset": 2048, 00:22:55.190 "data_size": 63488 00:22:55.190 }, 00:22:55.190 { 00:22:55.190 "name": "BaseBdev4", 00:22:55.190 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:55.190 "is_configured": true, 00:22:55.190 "data_offset": 2048, 00:22:55.190 "data_size": 63488 00:22:55.190 } 00:22:55.190 ] 00:22:55.190 }' 00:22:55.190 09:15:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.190 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:55.191 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.191 09:15:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:56.567 "name": "raid_bdev1", 00:22:56.567 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:56.567 
"strip_size_kb": 64, 00:22:56.567 "state": "online", 00:22:56.567 "raid_level": "raid5f", 00:22:56.567 "superblock": true, 00:22:56.567 "num_base_bdevs": 4, 00:22:56.567 "num_base_bdevs_discovered": 4, 00:22:56.567 "num_base_bdevs_operational": 4, 00:22:56.567 "process": { 00:22:56.567 "type": "rebuild", 00:22:56.567 "target": "spare", 00:22:56.567 "progress": { 00:22:56.567 "blocks": 65280, 00:22:56.567 "percent": 34 00:22:56.567 } 00:22:56.567 }, 00:22:56.567 "base_bdevs_list": [ 00:22:56.567 { 00:22:56.567 "name": "spare", 00:22:56.567 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:56.567 "is_configured": true, 00:22:56.567 "data_offset": 2048, 00:22:56.567 "data_size": 63488 00:22:56.567 }, 00:22:56.567 { 00:22:56.567 "name": "BaseBdev2", 00:22:56.567 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:56.567 "is_configured": true, 00:22:56.567 "data_offset": 2048, 00:22:56.567 "data_size": 63488 00:22:56.567 }, 00:22:56.567 { 00:22:56.567 "name": "BaseBdev3", 00:22:56.567 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:56.567 "is_configured": true, 00:22:56.567 "data_offset": 2048, 00:22:56.567 "data_size": 63488 00:22:56.567 }, 00:22:56.567 { 00:22:56.567 "name": "BaseBdev4", 00:22:56.567 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:56.567 "is_configured": true, 00:22:56.567 "data_offset": 2048, 00:22:56.567 "data_size": 63488 00:22:56.567 } 00:22:56.567 ] 00:22:56.567 }' 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.567 09:15:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:57.501 
09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.501 "name": "raid_bdev1", 00:22:57.501 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:57.501 "strip_size_kb": 64, 00:22:57.501 "state": "online", 00:22:57.501 "raid_level": "raid5f", 00:22:57.501 "superblock": true, 00:22:57.501 "num_base_bdevs": 4, 00:22:57.501 "num_base_bdevs_discovered": 4, 00:22:57.501 "num_base_bdevs_operational": 4, 00:22:57.501 "process": { 00:22:57.501 "type": "rebuild", 00:22:57.501 "target": "spare", 00:22:57.501 "progress": { 00:22:57.501 "blocks": 86400, 00:22:57.501 "percent": 45 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 "base_bdevs_list": [ 00:22:57.501 { 00:22:57.501 "name": "spare", 00:22:57.501 "uuid": 
"86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:57.501 "is_configured": true, 00:22:57.501 "data_offset": 2048, 00:22:57.501 "data_size": 63488 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "name": "BaseBdev2", 00:22:57.501 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:57.501 "is_configured": true, 00:22:57.501 "data_offset": 2048, 00:22:57.501 "data_size": 63488 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "name": "BaseBdev3", 00:22:57.501 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:57.501 "is_configured": true, 00:22:57.501 "data_offset": 2048, 00:22:57.501 "data_size": 63488 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "name": "BaseBdev4", 00:22:57.501 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:57.501 "is_configured": true, 00:22:57.501 "data_offset": 2048, 00:22:57.501 "data_size": 63488 00:22:57.501 } 00:22:57.501 ] 00:22:57.501 }' 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.501 09:15:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.501 09:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.501 09:15:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:58.876 "name": "raid_bdev1", 00:22:58.876 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:58.876 "strip_size_kb": 64, 00:22:58.876 "state": "online", 00:22:58.876 "raid_level": "raid5f", 00:22:58.876 "superblock": true, 00:22:58.876 "num_base_bdevs": 4, 00:22:58.876 "num_base_bdevs_discovered": 4, 00:22:58.876 "num_base_bdevs_operational": 4, 00:22:58.876 "process": { 00:22:58.876 "type": "rebuild", 00:22:58.876 "target": "spare", 00:22:58.876 "progress": { 00:22:58.876 "blocks": 109440, 00:22:58.876 "percent": 57 00:22:58.876 } 00:22:58.876 }, 00:22:58.876 "base_bdevs_list": [ 00:22:58.876 { 00:22:58.876 "name": "spare", 00:22:58.876 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:58.876 "is_configured": true, 00:22:58.876 "data_offset": 2048, 00:22:58.876 "data_size": 63488 00:22:58.876 }, 00:22:58.876 { 00:22:58.876 "name": "BaseBdev2", 00:22:58.876 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:58.876 "is_configured": true, 00:22:58.876 "data_offset": 2048, 00:22:58.876 "data_size": 63488 00:22:58.876 }, 00:22:58.876 { 00:22:58.876 "name": "BaseBdev3", 00:22:58.876 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:58.876 "is_configured": true, 00:22:58.876 
"data_offset": 2048, 00:22:58.876 "data_size": 63488 00:22:58.876 }, 00:22:58.876 { 00:22:58.876 "name": "BaseBdev4", 00:22:58.876 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:58.876 "is_configured": true, 00:22:58.876 "data_offset": 2048, 00:22:58.876 "data_size": 63488 00:22:58.876 } 00:22:58.876 ] 00:22:58.876 }' 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.876 09:15:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.811 "name": "raid_bdev1", 00:22:59.811 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:22:59.811 "strip_size_kb": 64, 00:22:59.811 "state": "online", 00:22:59.811 "raid_level": "raid5f", 00:22:59.811 "superblock": true, 00:22:59.811 "num_base_bdevs": 4, 00:22:59.811 "num_base_bdevs_discovered": 4, 00:22:59.811 "num_base_bdevs_operational": 4, 00:22:59.811 "process": { 00:22:59.811 "type": "rebuild", 00:22:59.811 "target": "spare", 00:22:59.811 "progress": { 00:22:59.811 "blocks": 130560, 00:22:59.811 "percent": 68 00:22:59.811 } 00:22:59.811 }, 00:22:59.811 "base_bdevs_list": [ 00:22:59.811 { 00:22:59.811 "name": "spare", 00:22:59.811 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:22:59.811 "is_configured": true, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 }, 00:22:59.811 { 00:22:59.811 "name": "BaseBdev2", 00:22:59.811 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:22:59.811 "is_configured": true, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 }, 00:22:59.811 { 00:22:59.811 "name": "BaseBdev3", 00:22:59.811 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:22:59.811 "is_configured": true, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 }, 00:22:59.811 { 00:22:59.811 "name": "BaseBdev4", 00:22:59.811 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:22:59.811 "is_configured": true, 00:22:59.811 "data_offset": 2048, 00:22:59.811 "data_size": 63488 00:22:59.811 } 00:22:59.811 ] 00:22:59.811 }' 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.811 09:15:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.184 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.184 "name": "raid_bdev1", 00:23:01.184 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:01.184 "strip_size_kb": 64, 00:23:01.184 "state": "online", 00:23:01.184 "raid_level": "raid5f", 00:23:01.184 "superblock": true, 00:23:01.184 "num_base_bdevs": 4, 00:23:01.184 "num_base_bdevs_discovered": 4, 
00:23:01.184 "num_base_bdevs_operational": 4, 00:23:01.184 "process": { 00:23:01.184 "type": "rebuild", 00:23:01.184 "target": "spare", 00:23:01.184 "progress": { 00:23:01.184 "blocks": 153600, 00:23:01.184 "percent": 80 00:23:01.184 } 00:23:01.184 }, 00:23:01.184 "base_bdevs_list": [ 00:23:01.184 { 00:23:01.184 "name": "spare", 00:23:01.184 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:01.184 "is_configured": true, 00:23:01.184 "data_offset": 2048, 00:23:01.184 "data_size": 63488 00:23:01.184 }, 00:23:01.184 { 00:23:01.184 "name": "BaseBdev2", 00:23:01.184 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:01.184 "is_configured": true, 00:23:01.184 "data_offset": 2048, 00:23:01.184 "data_size": 63488 00:23:01.184 }, 00:23:01.184 { 00:23:01.184 "name": "BaseBdev3", 00:23:01.184 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:01.184 "is_configured": true, 00:23:01.184 "data_offset": 2048, 00:23:01.184 "data_size": 63488 00:23:01.184 }, 00:23:01.184 { 00:23:01.184 "name": "BaseBdev4", 00:23:01.184 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:01.184 "is_configured": true, 00:23:01.184 "data_offset": 2048, 00:23:01.185 "data_size": 63488 00:23:01.185 } 00:23:01.185 ] 00:23:01.185 }' 00:23:01.185 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.185 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:01.185 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.185 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:01.185 09:15:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:02.120 "name": "raid_bdev1", 00:23:02.120 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:02.120 "strip_size_kb": 64, 00:23:02.120 "state": "online", 00:23:02.120 "raid_level": "raid5f", 00:23:02.120 "superblock": true, 00:23:02.120 "num_base_bdevs": 4, 00:23:02.120 "num_base_bdevs_discovered": 4, 00:23:02.120 "num_base_bdevs_operational": 4, 00:23:02.120 "process": { 00:23:02.120 "type": "rebuild", 00:23:02.120 "target": "spare", 00:23:02.120 "progress": { 00:23:02.120 "blocks": 174720, 00:23:02.120 "percent": 91 00:23:02.120 } 00:23:02.120 }, 00:23:02.120 "base_bdevs_list": [ 00:23:02.120 { 00:23:02.120 "name": "spare", 00:23:02.120 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:02.120 "is_configured": true, 00:23:02.120 "data_offset": 2048, 00:23:02.120 "data_size": 63488 00:23:02.120 }, 00:23:02.120 { 00:23:02.120 "name": "BaseBdev2", 
00:23:02.120 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:02.120 "is_configured": true, 00:23:02.120 "data_offset": 2048, 00:23:02.120 "data_size": 63488 00:23:02.120 }, 00:23:02.120 { 00:23:02.120 "name": "BaseBdev3", 00:23:02.120 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:02.120 "is_configured": true, 00:23:02.120 "data_offset": 2048, 00:23:02.120 "data_size": 63488 00:23:02.120 }, 00:23:02.120 { 00:23:02.120 "name": "BaseBdev4", 00:23:02.120 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:02.120 "is_configured": true, 00:23:02.120 "data_offset": 2048, 00:23:02.120 "data_size": 63488 00:23:02.120 } 00:23:02.120 ] 00:23:02.120 }' 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:02.120 09:15:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:03.053 [2024-11-06 09:15:39.325084] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:03.053 [2024-11-06 09:15:39.325171] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:03.053 [2024-11-06 09:15:39.325347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.311 09:15:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.311 "name": "raid_bdev1", 00:23:03.311 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:03.311 "strip_size_kb": 64, 00:23:03.311 "state": "online", 00:23:03.311 "raid_level": "raid5f", 00:23:03.311 "superblock": true, 00:23:03.311 "num_base_bdevs": 4, 00:23:03.311 "num_base_bdevs_discovered": 4, 00:23:03.311 "num_base_bdevs_operational": 4, 00:23:03.311 "base_bdevs_list": [ 00:23:03.311 { 00:23:03.311 "name": "spare", 00:23:03.311 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:03.311 "is_configured": true, 00:23:03.311 "data_offset": 2048, 00:23:03.311 "data_size": 63488 00:23:03.311 }, 00:23:03.311 { 00:23:03.311 "name": "BaseBdev2", 00:23:03.311 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:03.311 "is_configured": true, 00:23:03.311 "data_offset": 2048, 00:23:03.311 "data_size": 63488 00:23:03.311 }, 00:23:03.311 { 00:23:03.311 "name": "BaseBdev3", 00:23:03.311 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:03.311 "is_configured": true, 00:23:03.311 "data_offset": 2048, 00:23:03.311 
"data_size": 63488 00:23:03.311 }, 00:23:03.311 { 00:23:03.311 "name": "BaseBdev4", 00:23:03.311 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:03.311 "is_configured": true, 00:23:03.311 "data_offset": 2048, 00:23:03.311 "data_size": 63488 00:23:03.311 } 00:23:03.311 ] 00:23:03.311 }' 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.311 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.569 09:15:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:03.569 "name": "raid_bdev1", 00:23:03.569 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:03.569 "strip_size_kb": 64, 00:23:03.569 "state": "online", 00:23:03.569 "raid_level": "raid5f", 00:23:03.569 "superblock": true, 00:23:03.569 "num_base_bdevs": 4, 00:23:03.569 "num_base_bdevs_discovered": 4, 00:23:03.569 "num_base_bdevs_operational": 4, 00:23:03.569 "base_bdevs_list": [ 00:23:03.569 { 00:23:03.569 "name": "spare", 00:23:03.569 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:03.569 "is_configured": true, 00:23:03.569 "data_offset": 2048, 00:23:03.569 "data_size": 63488 00:23:03.569 }, 00:23:03.569 { 00:23:03.569 "name": "BaseBdev2", 00:23:03.569 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:03.569 "is_configured": true, 00:23:03.569 "data_offset": 2048, 00:23:03.569 "data_size": 63488 00:23:03.569 }, 00:23:03.569 { 00:23:03.569 "name": "BaseBdev3", 00:23:03.569 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:03.569 "is_configured": true, 00:23:03.569 "data_offset": 2048, 00:23:03.569 "data_size": 63488 00:23:03.569 }, 00:23:03.569 { 00:23:03.569 "name": "BaseBdev4", 00:23:03.569 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:03.569 "is_configured": true, 00:23:03.569 "data_offset": 2048, 00:23:03.569 "data_size": 63488 00:23:03.569 } 00:23:03.569 ] 00:23:03.569 }' 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.569 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.570 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:03.570 09:15:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.570 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.570 "name": "raid_bdev1", 00:23:03.570 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:03.570 "strip_size_kb": 64, 00:23:03.570 "state": "online", 00:23:03.570 "raid_level": "raid5f", 00:23:03.570 "superblock": true, 00:23:03.570 "num_base_bdevs": 4, 00:23:03.570 "num_base_bdevs_discovered": 4, 00:23:03.570 
"num_base_bdevs_operational": 4, 00:23:03.570 "base_bdevs_list": [ 00:23:03.570 { 00:23:03.570 "name": "spare", 00:23:03.570 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:03.570 "is_configured": true, 00:23:03.570 "data_offset": 2048, 00:23:03.570 "data_size": 63488 00:23:03.570 }, 00:23:03.570 { 00:23:03.570 "name": "BaseBdev2", 00:23:03.570 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:03.570 "is_configured": true, 00:23:03.570 "data_offset": 2048, 00:23:03.570 "data_size": 63488 00:23:03.570 }, 00:23:03.570 { 00:23:03.570 "name": "BaseBdev3", 00:23:03.570 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:03.570 "is_configured": true, 00:23:03.570 "data_offset": 2048, 00:23:03.570 "data_size": 63488 00:23:03.570 }, 00:23:03.570 { 00:23:03.570 "name": "BaseBdev4", 00:23:03.570 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:03.570 "is_configured": true, 00:23:03.570 "data_offset": 2048, 00:23:03.570 "data_size": 63488 00:23:03.570 } 00:23:03.570 ] 00:23:03.570 }' 00:23:03.570 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.570 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.136 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.137 [2024-11-06 09:15:40.501786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:04.137 [2024-11-06 09:15:40.501870] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:04.137 [2024-11-06 09:15:40.501966] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.137 [2024-11-06 09:15:40.502086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:23:04.137 [2024-11-06 09:15:40.502114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:23:04.137 09:15:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:04.137 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:04.403 /dev/nbd0 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.403 1+0 records in 00:23:04.403 1+0 records out 00:23:04.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563512 s, 7.3 MB/s 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # size=4096 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:04.403 09:15:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:05.003 /dev/nbd1 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:05.003 1+0 records in 00:23:05.003 1+0 records out 00:23:05.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036819 s, 11.1 MB/s 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.003 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.261 09:15:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.828 [2024-11-06 09:15:42.114887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:05.828 [2024-11-06 09:15:42.115092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.828 [2024-11-06 09:15:42.115138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:05.828 [2024-11-06 09:15:42.115155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.828 [2024-11-06 09:15:42.118044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.828 [2024-11-06 09:15:42.118213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:05.828 [2024-11-06 09:15:42.118344] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:05.828 [2024-11-06 09:15:42.118413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:05.828 [2024-11-06 09:15:42.118601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:05.828 [2024-11-06 09:15:42.118734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:23:05.828 [2024-11-06 09:15:42.118856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:05.828 spare 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.828 [2024-11-06 09:15:42.219017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:05.828 [2024-11-06 09:15:42.219257] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:05.828 [2024-11-06 09:15:42.219613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:23:05.828 [2024-11-06 09:15:42.226185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:05.828 [2024-11-06 09:15:42.226329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:05.828 [2024-11-06 09:15:42.226591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:05.828 09:15:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:05.828 "name": "raid_bdev1", 00:23:05.828 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:05.828 "strip_size_kb": 64, 00:23:05.828 "state": "online", 00:23:05.828 "raid_level": "raid5f", 00:23:05.828 "superblock": true, 00:23:05.828 "num_base_bdevs": 4, 00:23:05.828 "num_base_bdevs_discovered": 4, 00:23:05.828 "num_base_bdevs_operational": 4, 00:23:05.828 "base_bdevs_list": [ 00:23:05.828 { 00:23:05.828 "name": "spare", 00:23:05.828 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:05.828 "is_configured": true, 00:23:05.828 "data_offset": 2048, 00:23:05.828 "data_size": 63488 00:23:05.828 }, 00:23:05.828 { 00:23:05.828 "name": "BaseBdev2", 00:23:05.828 "uuid": 
"9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:05.828 "is_configured": true, 00:23:05.828 "data_offset": 2048, 00:23:05.828 "data_size": 63488 00:23:05.828 }, 00:23:05.828 { 00:23:05.828 "name": "BaseBdev3", 00:23:05.828 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:05.828 "is_configured": true, 00:23:05.828 "data_offset": 2048, 00:23:05.828 "data_size": 63488 00:23:05.828 }, 00:23:05.828 { 00:23:05.828 "name": "BaseBdev4", 00:23:05.828 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:05.828 "is_configured": true, 00:23:05.828 "data_offset": 2048, 00:23:05.828 "data_size": 63488 00:23:05.828 } 00:23:05.828 ] 00:23:05.828 }' 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:05.828 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.394 09:15:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:06.394 "name": "raid_bdev1", 00:23:06.394 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:06.394 "strip_size_kb": 64, 00:23:06.394 "state": "online", 00:23:06.394 "raid_level": "raid5f", 00:23:06.394 "superblock": true, 00:23:06.394 "num_base_bdevs": 4, 00:23:06.394 "num_base_bdevs_discovered": 4, 00:23:06.394 "num_base_bdevs_operational": 4, 00:23:06.394 "base_bdevs_list": [ 00:23:06.394 { 00:23:06.394 "name": "spare", 00:23:06.394 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:06.394 "is_configured": true, 00:23:06.394 "data_offset": 2048, 00:23:06.394 "data_size": 63488 00:23:06.394 }, 00:23:06.394 { 00:23:06.394 "name": "BaseBdev2", 00:23:06.394 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:06.394 "is_configured": true, 00:23:06.394 "data_offset": 2048, 00:23:06.394 "data_size": 63488 00:23:06.394 }, 00:23:06.394 { 00:23:06.394 "name": "BaseBdev3", 00:23:06.394 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:06.394 "is_configured": true, 00:23:06.394 "data_offset": 2048, 00:23:06.394 "data_size": 63488 00:23:06.394 }, 00:23:06.394 { 00:23:06.394 "name": "BaseBdev4", 00:23:06.394 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:06.394 "is_configured": true, 00:23:06.394 "data_offset": 2048, 00:23:06.394 "data_size": 63488 00:23:06.394 } 00:23:06.394 ] 00:23:06.394 }' 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:06.394 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.395 
09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.395 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.395 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:06.395 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.653 [2024-11-06 09:15:42.962197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.653 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.654 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.654 09:15:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.654 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.654 "name": "raid_bdev1", 00:23:06.654 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:06.654 "strip_size_kb": 64, 00:23:06.654 "state": "online", 00:23:06.654 "raid_level": "raid5f", 00:23:06.654 "superblock": true, 00:23:06.654 "num_base_bdevs": 4, 00:23:06.654 "num_base_bdevs_discovered": 3, 00:23:06.654 "num_base_bdevs_operational": 3, 00:23:06.654 "base_bdevs_list": [ 00:23:06.654 { 00:23:06.654 "name": null, 00:23:06.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.654 "is_configured": false, 00:23:06.654 "data_offset": 0, 00:23:06.654 "data_size": 63488 00:23:06.654 }, 00:23:06.654 { 00:23:06.654 "name": "BaseBdev2", 00:23:06.654 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:06.654 "is_configured": true, 00:23:06.654 "data_offset": 2048, 00:23:06.654 "data_size": 63488 00:23:06.654 }, 00:23:06.654 { 00:23:06.654 "name": "BaseBdev3", 00:23:06.654 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:06.654 "is_configured": true, 00:23:06.654 "data_offset": 2048, 00:23:06.654 "data_size": 63488 00:23:06.654 }, 00:23:06.654 { 00:23:06.654 "name": "BaseBdev4", 
00:23:06.654 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:06.654 "is_configured": true, 00:23:06.654 "data_offset": 2048, 00:23:06.654 "data_size": 63488 00:23:06.654 } 00:23:06.654 ] 00:23:06.654 }' 00:23:06.654 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.654 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:07.225 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.225 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:07.225 [2024-11-06 09:15:43.474417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:07.225 [2024-11-06 09:15:43.474666] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:07.225 [2024-11-06 09:15:43.474698] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:07.225 [2024-11-06 09:15:43.474744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:07.225 [2024-11-06 09:15:43.488066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:23:07.225 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.225 09:15:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:07.225 [2024-11-06 09:15:43.496805] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.160 "name": "raid_bdev1", 00:23:08.160 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:08.160 "strip_size_kb": 64, 00:23:08.160 "state": "online", 00:23:08.160 
"raid_level": "raid5f", 00:23:08.160 "superblock": true, 00:23:08.160 "num_base_bdevs": 4, 00:23:08.160 "num_base_bdevs_discovered": 4, 00:23:08.160 "num_base_bdevs_operational": 4, 00:23:08.160 "process": { 00:23:08.160 "type": "rebuild", 00:23:08.160 "target": "spare", 00:23:08.160 "progress": { 00:23:08.160 "blocks": 17280, 00:23:08.160 "percent": 9 00:23:08.160 } 00:23:08.160 }, 00:23:08.160 "base_bdevs_list": [ 00:23:08.160 { 00:23:08.160 "name": "spare", 00:23:08.160 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:08.160 "is_configured": true, 00:23:08.160 "data_offset": 2048, 00:23:08.160 "data_size": 63488 00:23:08.160 }, 00:23:08.160 { 00:23:08.160 "name": "BaseBdev2", 00:23:08.160 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:08.160 "is_configured": true, 00:23:08.160 "data_offset": 2048, 00:23:08.160 "data_size": 63488 00:23:08.160 }, 00:23:08.160 { 00:23:08.160 "name": "BaseBdev3", 00:23:08.160 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:08.160 "is_configured": true, 00:23:08.160 "data_offset": 2048, 00:23:08.160 "data_size": 63488 00:23:08.160 }, 00:23:08.160 { 00:23:08.160 "name": "BaseBdev4", 00:23:08.160 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:08.160 "is_configured": true, 00:23:08.160 "data_offset": 2048, 00:23:08.160 "data_size": 63488 00:23:08.160 } 00:23:08.160 ] 00:23:08.160 }' 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.160 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.160 [2024-11-06 09:15:44.654169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:08.419 [2024-11-06 09:15:44.707923] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:08.419 [2024-11-06 09:15:44.708190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:08.419 [2024-11-06 09:15:44.708326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:08.419 [2024-11-06 09:15:44.708386] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.419 "name": "raid_bdev1", 00:23:08.419 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:08.419 "strip_size_kb": 64, 00:23:08.419 "state": "online", 00:23:08.419 "raid_level": "raid5f", 00:23:08.419 "superblock": true, 00:23:08.419 "num_base_bdevs": 4, 00:23:08.419 "num_base_bdevs_discovered": 3, 00:23:08.419 "num_base_bdevs_operational": 3, 00:23:08.419 "base_bdevs_list": [ 00:23:08.419 { 00:23:08.419 "name": null, 00:23:08.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.419 "is_configured": false, 00:23:08.419 "data_offset": 0, 00:23:08.419 "data_size": 63488 00:23:08.419 }, 00:23:08.419 { 00:23:08.419 "name": "BaseBdev2", 00:23:08.419 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:08.419 "is_configured": true, 00:23:08.419 "data_offset": 2048, 00:23:08.419 "data_size": 63488 00:23:08.419 }, 00:23:08.419 { 00:23:08.419 "name": "BaseBdev3", 00:23:08.419 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:08.419 "is_configured": true, 00:23:08.419 "data_offset": 2048, 00:23:08.419 "data_size": 63488 00:23:08.419 }, 00:23:08.419 { 00:23:08.419 "name": "BaseBdev4", 00:23:08.419 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:08.419 "is_configured": true, 00:23:08.419 "data_offset": 2048, 00:23:08.419 "data_size": 63488 00:23:08.419 } 00:23:08.419 ] 00:23:08.419 }' 
00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.419 09:15:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.987 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:08.987 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.987 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.987 [2024-11-06 09:15:45.263885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:08.987 [2024-11-06 09:15:45.263977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.987 [2024-11-06 09:15:45.264017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:08.987 [2024-11-06 09:15:45.264038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.987 [2024-11-06 09:15:45.264621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.987 [2024-11-06 09:15:45.264662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:08.987 [2024-11-06 09:15:45.264779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:08.987 [2024-11-06 09:15:45.264804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:08.987 [2024-11-06 09:15:45.264834] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:08.987 [2024-11-06 09:15:45.264873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:08.987 [2024-11-06 09:15:45.278204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:23:08.987 spare 00:23:08.987 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.987 09:15:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:08.987 [2024-11-06 09:15:45.287317] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.921 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:09.921 "name": "raid_bdev1", 00:23:09.921 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:09.921 "strip_size_kb": 64, 00:23:09.921 "state": 
"online", 00:23:09.921 "raid_level": "raid5f", 00:23:09.921 "superblock": true, 00:23:09.921 "num_base_bdevs": 4, 00:23:09.921 "num_base_bdevs_discovered": 4, 00:23:09.921 "num_base_bdevs_operational": 4, 00:23:09.921 "process": { 00:23:09.921 "type": "rebuild", 00:23:09.921 "target": "spare", 00:23:09.921 "progress": { 00:23:09.921 "blocks": 17280, 00:23:09.921 "percent": 9 00:23:09.921 } 00:23:09.921 }, 00:23:09.921 "base_bdevs_list": [ 00:23:09.922 { 00:23:09.922 "name": "spare", 00:23:09.922 "uuid": "86d5d167-4eb1-50f1-811e-8bc2966af1f7", 00:23:09.922 "is_configured": true, 00:23:09.922 "data_offset": 2048, 00:23:09.922 "data_size": 63488 00:23:09.922 }, 00:23:09.922 { 00:23:09.922 "name": "BaseBdev2", 00:23:09.922 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:09.922 "is_configured": true, 00:23:09.922 "data_offset": 2048, 00:23:09.922 "data_size": 63488 00:23:09.922 }, 00:23:09.922 { 00:23:09.922 "name": "BaseBdev3", 00:23:09.922 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:09.922 "is_configured": true, 00:23:09.922 "data_offset": 2048, 00:23:09.922 "data_size": 63488 00:23:09.922 }, 00:23:09.922 { 00:23:09.922 "name": "BaseBdev4", 00:23:09.922 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:09.922 "is_configured": true, 00:23:09.922 "data_offset": 2048, 00:23:09.922 "data_size": 63488 00:23:09.922 } 00:23:09.922 ] 00:23:09.922 }' 00:23:09.922 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.922 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:09.922 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.922 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:09.922 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:09.922 09:15:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.922 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.922 [2024-11-06 09:15:46.444989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:10.198 [2024-11-06 09:15:46.500059] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:10.198 [2024-11-06 09:15:46.500161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.198 [2024-11-06 09:15:46.500192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:10.198 [2024-11-06 09:15:46.500203] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.198 09:15:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.198 "name": "raid_bdev1", 00:23:10.198 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:10.198 "strip_size_kb": 64, 00:23:10.198 "state": "online", 00:23:10.198 "raid_level": "raid5f", 00:23:10.198 "superblock": true, 00:23:10.198 "num_base_bdevs": 4, 00:23:10.198 "num_base_bdevs_discovered": 3, 00:23:10.198 "num_base_bdevs_operational": 3, 00:23:10.198 "base_bdevs_list": [ 00:23:10.198 { 00:23:10.198 "name": null, 00:23:10.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.198 "is_configured": false, 00:23:10.198 "data_offset": 0, 00:23:10.198 "data_size": 63488 00:23:10.198 }, 00:23:10.198 { 00:23:10.198 "name": "BaseBdev2", 00:23:10.198 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:10.198 "is_configured": true, 00:23:10.198 "data_offset": 2048, 00:23:10.198 "data_size": 63488 00:23:10.198 }, 00:23:10.198 { 00:23:10.198 "name": "BaseBdev3", 00:23:10.198 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:10.198 "is_configured": true, 00:23:10.198 "data_offset": 2048, 00:23:10.198 "data_size": 63488 00:23:10.198 }, 00:23:10.198 { 00:23:10.198 "name": "BaseBdev4", 00:23:10.198 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:10.198 "is_configured": true, 00:23:10.198 "data_offset": 2048, 00:23:10.198 
"data_size": 63488 00:23:10.198 } 00:23:10.198 ] 00:23:10.198 }' 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.198 09:15:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.787 "name": "raid_bdev1", 00:23:10.787 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:10.787 "strip_size_kb": 64, 00:23:10.787 "state": "online", 00:23:10.787 "raid_level": "raid5f", 00:23:10.787 "superblock": true, 00:23:10.787 "num_base_bdevs": 4, 00:23:10.787 "num_base_bdevs_discovered": 3, 00:23:10.787 "num_base_bdevs_operational": 3, 00:23:10.787 "base_bdevs_list": [ 00:23:10.787 { 00:23:10.787 "name": null, 00:23:10.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.787 
"is_configured": false, 00:23:10.787 "data_offset": 0, 00:23:10.787 "data_size": 63488 00:23:10.787 }, 00:23:10.787 { 00:23:10.787 "name": "BaseBdev2", 00:23:10.787 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:10.787 "is_configured": true, 00:23:10.787 "data_offset": 2048, 00:23:10.787 "data_size": 63488 00:23:10.787 }, 00:23:10.787 { 00:23:10.787 "name": "BaseBdev3", 00:23:10.787 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:10.787 "is_configured": true, 00:23:10.787 "data_offset": 2048, 00:23:10.787 "data_size": 63488 00:23:10.787 }, 00:23:10.787 { 00:23:10.787 "name": "BaseBdev4", 00:23:10.787 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:10.787 "is_configured": true, 00:23:10.787 "data_offset": 2048, 00:23:10.787 "data_size": 63488 00:23:10.787 } 00:23:10.787 ] 00:23:10.787 }' 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.787 09:15:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:10.787 [2024-11-06 09:15:47.232227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:10.787 [2024-11-06 09:15:47.232308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.787 [2024-11-06 09:15:47.232342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:10.787 [2024-11-06 09:15:47.232358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.787 [2024-11-06 09:15:47.232975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.787 [2024-11-06 09:15:47.233009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:10.787 [2024-11-06 09:15:47.233111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:10.787 [2024-11-06 09:15:47.233132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:10.787 [2024-11-06 09:15:47.233146] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:10.787 [2024-11-06 09:15:47.233158] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:10.787 BaseBdev1 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.787 09:15:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.722 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.981 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.981 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:11.981 "name": "raid_bdev1", 00:23:11.981 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:11.981 "strip_size_kb": 64, 00:23:11.981 "state": "online", 00:23:11.981 "raid_level": "raid5f", 00:23:11.981 "superblock": true, 00:23:11.981 "num_base_bdevs": 4, 00:23:11.981 "num_base_bdevs_discovered": 3, 00:23:11.981 "num_base_bdevs_operational": 3, 00:23:11.981 "base_bdevs_list": [ 00:23:11.981 { 00:23:11.981 "name": null, 00:23:11.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.981 "is_configured": false, 00:23:11.981 
"data_offset": 0, 00:23:11.981 "data_size": 63488 00:23:11.981 }, 00:23:11.981 { 00:23:11.981 "name": "BaseBdev2", 00:23:11.981 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:11.981 "is_configured": true, 00:23:11.981 "data_offset": 2048, 00:23:11.981 "data_size": 63488 00:23:11.981 }, 00:23:11.981 { 00:23:11.981 "name": "BaseBdev3", 00:23:11.981 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:11.981 "is_configured": true, 00:23:11.981 "data_offset": 2048, 00:23:11.981 "data_size": 63488 00:23:11.981 }, 00:23:11.981 { 00:23:11.981 "name": "BaseBdev4", 00:23:11.981 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:11.981 "is_configured": true, 00:23:11.981 "data_offset": 2048, 00:23:11.981 "data_size": 63488 00:23:11.981 } 00:23:11.981 ] 00:23:11.981 }' 00:23:11.981 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:11.981 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.239 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:23:12.498 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.498 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:12.498 "name": "raid_bdev1", 00:23:12.498 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:12.498 "strip_size_kb": 64, 00:23:12.498 "state": "online", 00:23:12.498 "raid_level": "raid5f", 00:23:12.498 "superblock": true, 00:23:12.498 "num_base_bdevs": 4, 00:23:12.498 "num_base_bdevs_discovered": 3, 00:23:12.498 "num_base_bdevs_operational": 3, 00:23:12.498 "base_bdevs_list": [ 00:23:12.498 { 00:23:12.498 "name": null, 00:23:12.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.498 "is_configured": false, 00:23:12.498 "data_offset": 0, 00:23:12.498 "data_size": 63488 00:23:12.498 }, 00:23:12.498 { 00:23:12.498 "name": "BaseBdev2", 00:23:12.498 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:12.498 "is_configured": true, 00:23:12.498 "data_offset": 2048, 00:23:12.498 "data_size": 63488 00:23:12.498 }, 00:23:12.498 { 00:23:12.498 "name": "BaseBdev3", 00:23:12.498 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:12.498 "is_configured": true, 00:23:12.498 "data_offset": 2048, 00:23:12.498 "data_size": 63488 00:23:12.498 }, 00:23:12.498 { 00:23:12.498 "name": "BaseBdev4", 00:23:12.498 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:12.498 "is_configured": true, 00:23:12.498 "data_offset": 2048, 00:23:12.499 "data_size": 63488 00:23:12.499 } 00:23:12.499 ] 00:23:12.499 }' 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:12.499 
09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:12.499 [2024-11-06 09:15:48.932859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:12.499 [2024-11-06 09:15:48.933117] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:12.499 [2024-11-06 09:15:48.933153] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:12.499 request: 00:23:12.499 { 00:23:12.499 "base_bdev": "BaseBdev1", 00:23:12.499 "raid_bdev": "raid_bdev1", 00:23:12.499 "method": "bdev_raid_add_base_bdev", 00:23:12.499 "req_id": 1 00:23:12.499 } 00:23:12.499 Got JSON-RPC error response 00:23:12.499 response: 00:23:12.499 { 00:23:12.499 "code": -22, 00:23:12.499 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:23:12.499 } 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:12.499 09:15:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.434 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.693 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:13.693 "name": "raid_bdev1", 00:23:13.693 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:13.693 "strip_size_kb": 64, 00:23:13.693 "state": "online", 00:23:13.693 "raid_level": "raid5f", 00:23:13.693 "superblock": true, 00:23:13.693 "num_base_bdevs": 4, 00:23:13.693 "num_base_bdevs_discovered": 3, 00:23:13.693 "num_base_bdevs_operational": 3, 00:23:13.693 "base_bdevs_list": [ 00:23:13.693 { 00:23:13.693 "name": null, 00:23:13.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.693 "is_configured": false, 00:23:13.693 "data_offset": 0, 00:23:13.693 "data_size": 63488 00:23:13.693 }, 00:23:13.693 { 00:23:13.693 "name": "BaseBdev2", 00:23:13.693 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:13.693 "is_configured": true, 00:23:13.693 "data_offset": 2048, 00:23:13.693 "data_size": 63488 00:23:13.693 }, 00:23:13.693 { 00:23:13.693 "name": "BaseBdev3", 00:23:13.693 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:13.693 "is_configured": true, 00:23:13.693 "data_offset": 2048, 00:23:13.693 "data_size": 63488 00:23:13.693 }, 00:23:13.693 { 00:23:13.693 "name": "BaseBdev4", 00:23:13.693 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:13.693 "is_configured": true, 00:23:13.693 "data_offset": 2048, 00:23:13.693 "data_size": 63488 00:23:13.693 } 00:23:13.693 ] 00:23:13.693 }' 00:23:13.693 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:13.693 09:15:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.953 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:14.212 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.212 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:14.212 "name": "raid_bdev1", 00:23:14.212 "uuid": "64f1907a-7cd8-49b7-a1fe-96ea26964359", 00:23:14.212 "strip_size_kb": 64, 00:23:14.212 "state": "online", 00:23:14.212 "raid_level": "raid5f", 00:23:14.212 "superblock": true, 00:23:14.212 "num_base_bdevs": 4, 00:23:14.212 "num_base_bdevs_discovered": 3, 00:23:14.212 "num_base_bdevs_operational": 3, 00:23:14.212 "base_bdevs_list": [ 00:23:14.212 { 00:23:14.212 "name": null, 00:23:14.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:14.212 "is_configured": false, 00:23:14.212 "data_offset": 0, 00:23:14.212 "data_size": 63488 00:23:14.212 }, 00:23:14.212 { 00:23:14.212 "name": "BaseBdev2", 00:23:14.212 "uuid": "9ad9f331-2a79-5527-b310-cf9d242763ce", 00:23:14.212 "is_configured": true, 
00:23:14.213 "data_offset": 2048, 00:23:14.213 "data_size": 63488 00:23:14.213 }, 00:23:14.213 { 00:23:14.213 "name": "BaseBdev3", 00:23:14.213 "uuid": "a4b6a282-4c8d-5019-aa69-4bcd88072d86", 00:23:14.213 "is_configured": true, 00:23:14.213 "data_offset": 2048, 00:23:14.213 "data_size": 63488 00:23:14.213 }, 00:23:14.213 { 00:23:14.213 "name": "BaseBdev4", 00:23:14.213 "uuid": "87eb50b1-0152-5c4d-a7a1-457ec81938ab", 00:23:14.213 "is_configured": true, 00:23:14.213 "data_offset": 2048, 00:23:14.213 "data_size": 63488 00:23:14.213 } 00:23:14.213 ] 00:23:14.213 }' 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85423 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 85423 ']' 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 85423 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85423 00:23:14.213 killing process with pid 85423 00:23:14.213 Received shutdown signal, test time was about 60.000000 seconds 00:23:14.213 00:23:14.213 Latency(us) 00:23:14.213 [2024-11-06T09:15:50.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.213 [2024-11-06T09:15:50.748Z] 
=================================================================================================================== 00:23:14.213 [2024-11-06T09:15:50.748Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85423' 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 85423 00:23:14.213 [2024-11-06 09:15:50.670912] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:14.213 09:15:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 85423 00:23:14.213 [2024-11-06 09:15:50.671058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.213 [2024-11-06 09:15:50.671157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.213 [2024-11-06 09:15:50.671177] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:14.780 [2024-11-06 09:15:51.106166] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:15.716 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:23:15.716 00:23:15.716 real 0m28.645s 00:23:15.716 user 0m37.335s 00:23:15.716 sys 0m2.899s 00:23:15.716 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.716 ************************************ 00:23:15.716 END TEST raid5f_rebuild_test_sb 00:23:15.716 ************************************ 00:23:15.716 09:15:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.716 09:15:52 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:23:15.716 09:15:52 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:23:15.716 09:15:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:15.716 09:15:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.716 09:15:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:15.716 ************************************ 00:23:15.716 START TEST raid_state_function_test_sb_4k 00:23:15.716 ************************************ 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:15.716 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:15.717 09:15:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:15.717 Process raid pid: 86245 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86245 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86245' 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86245 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86245 ']' 00:23:15.717 09:15:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.717 09:15:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:15.976 [2024-11-06 09:15:52.303608] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:23:15.976 [2024-11-06 09:15:52.303794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.976 [2024-11-06 09:15:52.489084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.235 [2024-11-06 09:15:52.623733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.493 [2024-11-06 09:15:52.834550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:16.493 [2024-11-06 09:15:52.834620] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.059 [2024-11-06 09:15:53.412105] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:17.059 [2024-11-06 09:15:53.412184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:17.059 [2024-11-06 09:15:53.412205] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.059 [2024-11-06 09:15:53.412225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.059 
09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.059 "name": "Existed_Raid", 00:23:17.059 "uuid": "e37f574d-dc0c-4ff5-9203-479a00cf21b2", 00:23:17.059 "strip_size_kb": 0, 00:23:17.059 "state": "configuring", 00:23:17.059 "raid_level": "raid1", 00:23:17.059 "superblock": true, 00:23:17.059 "num_base_bdevs": 2, 00:23:17.059 "num_base_bdevs_discovered": 0, 00:23:17.059 "num_base_bdevs_operational": 2, 00:23:17.059 "base_bdevs_list": [ 00:23:17.059 { 00:23:17.059 "name": "BaseBdev1", 00:23:17.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.059 "is_configured": false, 00:23:17.059 "data_offset": 0, 00:23:17.059 "data_size": 0 00:23:17.059 }, 00:23:17.059 { 00:23:17.059 "name": "BaseBdev2", 00:23:17.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.059 "is_configured": false, 00:23:17.059 "data_offset": 0, 00:23:17.059 "data_size": 0 00:23:17.059 } 00:23:17.059 ] 00:23:17.059 }' 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.059 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.626 [2024-11-06 09:15:53.952109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.626 [2024-11-06 09:15:53.952174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.626 [2024-11-06 09:15:53.960090] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:17.626 [2024-11-06 09:15:53.960143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:17.626 [2024-11-06 09:15:53.960159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:17.626 [2024-11-06 09:15:53.960178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:23:17.626 09:15:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.626 09:15:53 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.626 [2024-11-06 09:15:54.008606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:17.626 BaseBdev1 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.626 [ 00:23:17.626 { 00:23:17.626 "name": "BaseBdev1", 00:23:17.626 "aliases": [ 00:23:17.626 
"f524a593-3f60-48a7-94d5-e2037599f34e" 00:23:17.626 ], 00:23:17.626 "product_name": "Malloc disk", 00:23:17.626 "block_size": 4096, 00:23:17.626 "num_blocks": 8192, 00:23:17.626 "uuid": "f524a593-3f60-48a7-94d5-e2037599f34e", 00:23:17.626 "assigned_rate_limits": { 00:23:17.626 "rw_ios_per_sec": 0, 00:23:17.626 "rw_mbytes_per_sec": 0, 00:23:17.626 "r_mbytes_per_sec": 0, 00:23:17.626 "w_mbytes_per_sec": 0 00:23:17.626 }, 00:23:17.626 "claimed": true, 00:23:17.626 "claim_type": "exclusive_write", 00:23:17.626 "zoned": false, 00:23:17.626 "supported_io_types": { 00:23:17.626 "read": true, 00:23:17.626 "write": true, 00:23:17.626 "unmap": true, 00:23:17.626 "flush": true, 00:23:17.626 "reset": true, 00:23:17.626 "nvme_admin": false, 00:23:17.626 "nvme_io": false, 00:23:17.626 "nvme_io_md": false, 00:23:17.626 "write_zeroes": true, 00:23:17.626 "zcopy": true, 00:23:17.626 "get_zone_info": false, 00:23:17.626 "zone_management": false, 00:23:17.626 "zone_append": false, 00:23:17.626 "compare": false, 00:23:17.626 "compare_and_write": false, 00:23:17.626 "abort": true, 00:23:17.626 "seek_hole": false, 00:23:17.626 "seek_data": false, 00:23:17.626 "copy": true, 00:23:17.626 "nvme_iov_md": false 00:23:17.626 }, 00:23:17.626 "memory_domains": [ 00:23:17.626 { 00:23:17.626 "dma_device_id": "system", 00:23:17.626 "dma_device_type": 1 00:23:17.626 }, 00:23:17.626 { 00:23:17.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:17.626 "dma_device_type": 2 00:23:17.626 } 00:23:17.626 ], 00:23:17.626 "driver_specific": {} 00:23:17.626 } 00:23:17.626 ] 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:17.626 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:17.627 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:17.627 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.627 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:17.627 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.627 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:17.627 "name": "Existed_Raid", 00:23:17.627 "uuid": "70e502c5-ebef-4f87-a937-080c447ad8d0", 00:23:17.627 "strip_size_kb": 0, 00:23:17.627 "state": "configuring", 00:23:17.627 "raid_level": "raid1", 00:23:17.627 "superblock": true, 00:23:17.627 "num_base_bdevs": 2, 00:23:17.627 
"num_base_bdevs_discovered": 1, 00:23:17.627 "num_base_bdevs_operational": 2, 00:23:17.627 "base_bdevs_list": [ 00:23:17.627 { 00:23:17.627 "name": "BaseBdev1", 00:23:17.627 "uuid": "f524a593-3f60-48a7-94d5-e2037599f34e", 00:23:17.627 "is_configured": true, 00:23:17.627 "data_offset": 256, 00:23:17.627 "data_size": 7936 00:23:17.627 }, 00:23:17.627 { 00:23:17.627 "name": "BaseBdev2", 00:23:17.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:17.627 "is_configured": false, 00:23:17.627 "data_offset": 0, 00:23:17.627 "data_size": 0 00:23:17.627 } 00:23:17.627 ] 00:23:17.627 }' 00:23:17.627 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:17.627 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.194 [2024-11-06 09:15:54.568739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:18.194 [2024-11-06 09:15:54.568797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.194 [2024-11-06 09:15:54.576790] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:18.194 [2024-11-06 09:15:54.579337] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:18.194 [2024-11-06 09:15:54.579388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.194 "name": "Existed_Raid", 00:23:18.194 "uuid": "12272c25-6c25-430c-907d-206b0124d67e", 00:23:18.194 "strip_size_kb": 0, 00:23:18.194 "state": "configuring", 00:23:18.194 "raid_level": "raid1", 00:23:18.194 "superblock": true, 00:23:18.194 "num_base_bdevs": 2, 00:23:18.194 "num_base_bdevs_discovered": 1, 00:23:18.194 "num_base_bdevs_operational": 2, 00:23:18.194 "base_bdevs_list": [ 00:23:18.194 { 00:23:18.194 "name": "BaseBdev1", 00:23:18.194 "uuid": "f524a593-3f60-48a7-94d5-e2037599f34e", 00:23:18.194 "is_configured": true, 00:23:18.194 "data_offset": 256, 00:23:18.194 "data_size": 7936 00:23:18.194 }, 00:23:18.194 { 00:23:18.194 "name": "BaseBdev2", 00:23:18.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.194 "is_configured": false, 00:23:18.194 "data_offset": 0, 00:23:18.194 "data_size": 0 00:23:18.194 } 00:23:18.194 ] 00:23:18.194 }' 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.194 09:15:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.761 09:15:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.761 [2024-11-06 09:15:55.151700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:18.761 BaseBdev2 00:23:18.761 [2024-11-06 09:15:55.152251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:18.761 [2024-11-06 09:15:55.152277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:18.761 [2024-11-06 09:15:55.152610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:18.761 [2024-11-06 09:15:55.152863] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:18.761 [2024-11-06 09:15:55.152887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:18.761 [2024-11-06 09:15:55.153067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:18.761 09:15:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.761 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.761 [ 00:23:18.761 { 00:23:18.761 "name": "BaseBdev2", 00:23:18.761 "aliases": [ 00:23:18.761 "567b7a8a-2d74-4fef-80e6-ed3869532450" 00:23:18.761 ], 00:23:18.761 "product_name": "Malloc disk", 00:23:18.761 "block_size": 4096, 00:23:18.761 "num_blocks": 8192, 00:23:18.761 "uuid": "567b7a8a-2d74-4fef-80e6-ed3869532450", 00:23:18.761 "assigned_rate_limits": { 00:23:18.761 "rw_ios_per_sec": 0, 00:23:18.761 "rw_mbytes_per_sec": 0, 00:23:18.761 "r_mbytes_per_sec": 0, 00:23:18.761 "w_mbytes_per_sec": 0 00:23:18.761 }, 00:23:18.761 "claimed": true, 00:23:18.761 "claim_type": "exclusive_write", 00:23:18.761 "zoned": false, 00:23:18.761 "supported_io_types": { 00:23:18.761 "read": true, 00:23:18.761 "write": true, 00:23:18.761 "unmap": true, 00:23:18.761 "flush": true, 00:23:18.761 "reset": true, 00:23:18.761 "nvme_admin": false, 00:23:18.761 "nvme_io": false, 00:23:18.761 "nvme_io_md": false, 00:23:18.761 "write_zeroes": true, 00:23:18.761 "zcopy": true, 00:23:18.761 "get_zone_info": false, 00:23:18.761 "zone_management": false, 00:23:18.761 "zone_append": false, 00:23:18.761 "compare": false, 00:23:18.761 "compare_and_write": false, 00:23:18.762 "abort": true, 00:23:18.762 "seek_hole": false, 00:23:18.762 "seek_data": false, 00:23:18.762 "copy": true, 00:23:18.762 "nvme_iov_md": false 
00:23:18.762 }, 00:23:18.762 "memory_domains": [ 00:23:18.762 { 00:23:18.762 "dma_device_id": "system", 00:23:18.762 "dma_device_type": 1 00:23:18.762 }, 00:23:18.762 { 00:23:18.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:18.762 "dma_device_type": 2 00:23:18.762 } 00:23:18.762 ], 00:23:18.762 "driver_specific": {} 00:23:18.762 } 00:23:18.762 ] 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:18.762 "name": "Existed_Raid", 00:23:18.762 "uuid": "12272c25-6c25-430c-907d-206b0124d67e", 00:23:18.762 "strip_size_kb": 0, 00:23:18.762 "state": "online", 00:23:18.762 "raid_level": "raid1", 00:23:18.762 "superblock": true, 00:23:18.762 "num_base_bdevs": 2, 00:23:18.762 "num_base_bdevs_discovered": 2, 00:23:18.762 "num_base_bdevs_operational": 2, 00:23:18.762 "base_bdevs_list": [ 00:23:18.762 { 00:23:18.762 "name": "BaseBdev1", 00:23:18.762 "uuid": "f524a593-3f60-48a7-94d5-e2037599f34e", 00:23:18.762 "is_configured": true, 00:23:18.762 "data_offset": 256, 00:23:18.762 "data_size": 7936 00:23:18.762 }, 00:23:18.762 { 00:23:18.762 "name": "BaseBdev2", 00:23:18.762 "uuid": "567b7a8a-2d74-4fef-80e6-ed3869532450", 00:23:18.762 "is_configured": true, 00:23:18.762 "data_offset": 256, 00:23:18.762 "data_size": 7936 00:23:18.762 } 00:23:18.762 ] 00:23:18.762 }' 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:18.762 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:19.328 09:15:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:19.328 [2024-11-06 09:15:55.744291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.328 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:19.328 "name": "Existed_Raid", 00:23:19.328 "aliases": [ 00:23:19.328 "12272c25-6c25-430c-907d-206b0124d67e" 00:23:19.328 ], 00:23:19.328 "product_name": "Raid Volume", 00:23:19.328 "block_size": 4096, 00:23:19.328 "num_blocks": 7936, 00:23:19.328 "uuid": "12272c25-6c25-430c-907d-206b0124d67e", 00:23:19.328 "assigned_rate_limits": { 00:23:19.328 "rw_ios_per_sec": 0, 00:23:19.328 "rw_mbytes_per_sec": 0, 00:23:19.328 "r_mbytes_per_sec": 0, 00:23:19.328 "w_mbytes_per_sec": 0 00:23:19.328 }, 00:23:19.328 "claimed": false, 00:23:19.328 "zoned": false, 00:23:19.328 "supported_io_types": { 00:23:19.328 "read": true, 
00:23:19.328 "write": true, 00:23:19.328 "unmap": false, 00:23:19.328 "flush": false, 00:23:19.328 "reset": true, 00:23:19.328 "nvme_admin": false, 00:23:19.328 "nvme_io": false, 00:23:19.328 "nvme_io_md": false, 00:23:19.328 "write_zeroes": true, 00:23:19.328 "zcopy": false, 00:23:19.328 "get_zone_info": false, 00:23:19.328 "zone_management": false, 00:23:19.328 "zone_append": false, 00:23:19.328 "compare": false, 00:23:19.328 "compare_and_write": false, 00:23:19.328 "abort": false, 00:23:19.328 "seek_hole": false, 00:23:19.328 "seek_data": false, 00:23:19.328 "copy": false, 00:23:19.328 "nvme_iov_md": false 00:23:19.328 }, 00:23:19.328 "memory_domains": [ 00:23:19.328 { 00:23:19.328 "dma_device_id": "system", 00:23:19.328 "dma_device_type": 1 00:23:19.328 }, 00:23:19.328 { 00:23:19.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.328 "dma_device_type": 2 00:23:19.328 }, 00:23:19.328 { 00:23:19.328 "dma_device_id": "system", 00:23:19.328 "dma_device_type": 1 00:23:19.328 }, 00:23:19.328 { 00:23:19.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.328 "dma_device_type": 2 00:23:19.328 } 00:23:19.328 ], 00:23:19.328 "driver_specific": { 00:23:19.328 "raid": { 00:23:19.328 "uuid": "12272c25-6c25-430c-907d-206b0124d67e", 00:23:19.328 "strip_size_kb": 0, 00:23:19.328 "state": "online", 00:23:19.328 "raid_level": "raid1", 00:23:19.328 "superblock": true, 00:23:19.328 "num_base_bdevs": 2, 00:23:19.328 "num_base_bdevs_discovered": 2, 00:23:19.328 "num_base_bdevs_operational": 2, 00:23:19.328 "base_bdevs_list": [ 00:23:19.328 { 00:23:19.328 "name": "BaseBdev1", 00:23:19.328 "uuid": "f524a593-3f60-48a7-94d5-e2037599f34e", 00:23:19.328 "is_configured": true, 00:23:19.328 "data_offset": 256, 00:23:19.328 "data_size": 7936 00:23:19.328 }, 00:23:19.328 { 00:23:19.328 "name": "BaseBdev2", 00:23:19.328 "uuid": "567b7a8a-2d74-4fef-80e6-ed3869532450", 00:23:19.328 "is_configured": true, 00:23:19.328 "data_offset": 256, 00:23:19.328 "data_size": 7936 00:23:19.328 } 
00:23:19.328 ] 00:23:19.328 } 00:23:19.328 } 00:23:19.328 }' 00:23:19.329 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:19.329 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:19.329 BaseBdev2' 00:23:19.329 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.587 09:15:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.587 [2024-11-06 09:15:55.996130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:19.587 09:15:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:19.587 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:19.588 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.845 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:19.845 "name": "Existed_Raid", 00:23:19.845 "uuid": "12272c25-6c25-430c-907d-206b0124d67e", 00:23:19.845 "strip_size_kb": 0, 00:23:19.845 "state": "online", 00:23:19.845 "raid_level": "raid1", 00:23:19.845 "superblock": true, 00:23:19.845 
"num_base_bdevs": 2, 00:23:19.845 "num_base_bdevs_discovered": 1, 00:23:19.845 "num_base_bdevs_operational": 1, 00:23:19.845 "base_bdevs_list": [ 00:23:19.845 { 00:23:19.845 "name": null, 00:23:19.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.845 "is_configured": false, 00:23:19.845 "data_offset": 0, 00:23:19.845 "data_size": 7936 00:23:19.845 }, 00:23:19.845 { 00:23:19.845 "name": "BaseBdev2", 00:23:19.845 "uuid": "567b7a8a-2d74-4fef-80e6-ed3869532450", 00:23:19.845 "is_configured": true, 00:23:19.845 "data_offset": 256, 00:23:19.845 "data_size": 7936 00:23:19.845 } 00:23:19.845 ] 00:23:19.845 }' 00:23:19.845 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:19.845 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.102 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:20.103 [2024-11-06 09:15:56.633014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:20.103 [2024-11-06 09:15:56.633261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:20.361 [2024-11-06 09:15:56.715178] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.361 [2024-11-06 09:15:56.715444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.361 [2024-11-06 09:15:56.715479] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:20.361 09:15:56 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86245 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86245 ']' 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86245 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86245 00:23:20.361 killing process with pid 86245 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86245' 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86245 00:23:20.361 [2024-11-06 09:15:56.804957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:20.361 09:15:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86245 00:23:20.361 [2024-11-06 09:15:56.819897] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:21.736 09:15:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:23:21.736 00:23:21.736 real 0m5.650s 00:23:21.736 user 0m8.621s 00:23:21.737 sys 0m0.808s 00:23:21.737 09:15:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:21.737 09:15:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:23:21.737 ************************************
00:23:21.737 END TEST raid_state_function_test_sb_4k
00:23:21.737 ************************************
00:23:21.737 09:15:57 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2
00:23:21.737 09:15:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:23:21.737 09:15:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:21.737 09:15:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:23:21.737 ************************************
00:23:21.737 START TEST raid_superblock_test_4k
00:23:21.737 ************************************
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']'
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86503
00:23:21.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86503
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 86503 ']'
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable
00:23:21.737 09:15:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:21.737 [2024-11-06 09:15:58.011053] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization...
00:23:21.737 [2024-11-06 09:15:58.011486] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86503 ]
00:23:21.737 [2024-11-06 09:15:58.202168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:21.995 [2024-11-06 09:15:58.353080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:22.253 [2024-11-06 09:15:58.558763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:22.253 [2024-11-06 09:15:58.558810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.523 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:22.781 malloc1
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:22.781 [2024-11-06 09:15:59.069954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:22.781 [2024-11-06 09:15:59.070028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:22.781 [2024-11-06 09:15:59.070059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:23:22.781 [2024-11-06 09:15:59.070074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:22.781 [2024-11-06 09:15:59.072864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:22.781 [2024-11-06 09:15:59.072922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:22.781 pt1
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:22.781 malloc2
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:22.781 [2024-11-06 09:15:59.121599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:22.781 [2024-11-06 09:15:59.121690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:22.781 [2024-11-06 09:15:59.121720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:23:22.781 [2024-11-06 09:15:59.121734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:22.781 [2024-11-06 09:15:59.124524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:22.781 [2024-11-06 09:15:59.124574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:22.781 pt2
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:22.781 [2024-11-06 09:15:59.129663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:22.781 [2024-11-06 09:15:59.132177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:22.781 [2024-11-06 09:15:59.132406] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:23:22.781 [2024-11-06 09:15:59.132430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:23:22.781 [2024-11-06 09:15:59.132720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:23:22.781 [2024-11-06 09:15:59.132946] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:23:22.781 [2024-11-06 09:15:59.132973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:23:22.781 [2024-11-06 09:15:59.133153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:23:22.781 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:22.782 "name": "raid_bdev1",
00:23:22.782 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8",
00:23:22.782 "strip_size_kb": 0,
00:23:22.782 "state": "online",
00:23:22.782 "raid_level": "raid1",
00:23:22.782 "superblock": true,
00:23:22.782 "num_base_bdevs": 2,
00:23:22.782 "num_base_bdevs_discovered": 2,
00:23:22.782 "num_base_bdevs_operational": 2,
00:23:22.782 "base_bdevs_list": [
00:23:22.782 {
00:23:22.782 "name": "pt1",
00:23:22.782 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:22.782 "is_configured": true,
00:23:22.782 "data_offset": 256,
00:23:22.782 "data_size": 7936
00:23:22.782 },
00:23:22.782 {
00:23:22.782 "name": "pt2",
00:23:22.782 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:22.782 "is_configured": true,
00:23:22.782 "data_offset": 256,
00:23:22.782 "data_size": 7936
00:23:22.782 }
00:23:22.782 ]
00:23:22.782 }'
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:22.782 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.348 [2024-11-06 09:15:59.654149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:23.348 "name": "raid_bdev1",
00:23:23.348 "aliases": [
00:23:23.348 "7558ea14-d830-4b50-8578-13b4b02164c8"
00:23:23.348 ],
00:23:23.348 "product_name": "Raid Volume",
00:23:23.348 "block_size": 4096,
00:23:23.348 "num_blocks": 7936,
00:23:23.348 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8",
00:23:23.348 "assigned_rate_limits": {
00:23:23.348 "rw_ios_per_sec": 0,
00:23:23.348 "rw_mbytes_per_sec": 0,
00:23:23.348 "r_mbytes_per_sec": 0,
00:23:23.348 "w_mbytes_per_sec": 0
00:23:23.348 },
00:23:23.348 "claimed": false,
00:23:23.348 "zoned": false,
00:23:23.348 "supported_io_types": {
00:23:23.348 "read": true,
00:23:23.348 "write": true,
00:23:23.348 "unmap": false,
00:23:23.348 "flush": false,
00:23:23.348 "reset": true,
00:23:23.348 "nvme_admin": false,
00:23:23.348 "nvme_io": false,
00:23:23.348 "nvme_io_md": false,
00:23:23.348 "write_zeroes": true,
00:23:23.348 "zcopy": false,
00:23:23.348 "get_zone_info": false,
00:23:23.348 "zone_management": false,
00:23:23.348 "zone_append": false,
00:23:23.348 "compare": false,
00:23:23.348 "compare_and_write": false,
00:23:23.348 "abort": false,
00:23:23.348 "seek_hole": false,
00:23:23.348 "seek_data": false,
00:23:23.348 "copy": false,
00:23:23.348 "nvme_iov_md": false
00:23:23.348 },
00:23:23.348 "memory_domains": [
00:23:23.348 {
00:23:23.348 "dma_device_id": "system",
00:23:23.348 "dma_device_type": 1
00:23:23.348 },
00:23:23.348 {
00:23:23.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:23.348 "dma_device_type": 2
00:23:23.348 },
00:23:23.348 {
00:23:23.348 "dma_device_id": "system",
00:23:23.348 "dma_device_type": 1
00:23:23.348 },
00:23:23.348 {
00:23:23.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:23.348 "dma_device_type": 2
00:23:23.348 }
00:23:23.348 ],
00:23:23.348 "driver_specific": {
00:23:23.348 "raid": {
00:23:23.348 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8",
00:23:23.348 "strip_size_kb": 0,
00:23:23.348 "state": "online",
00:23:23.348 "raid_level": "raid1",
00:23:23.348 "superblock": true,
00:23:23.348 "num_base_bdevs": 2,
00:23:23.348 "num_base_bdevs_discovered": 2,
00:23:23.348 "num_base_bdevs_operational": 2,
00:23:23.348 "base_bdevs_list": [
00:23:23.348 {
00:23:23.348 "name": "pt1",
00:23:23.348 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:23.348 "is_configured": true,
00:23:23.348 "data_offset": 256,
00:23:23.348 "data_size": 7936
00:23:23.348 },
00:23:23.348 {
00:23:23.348 "name": "pt2",
00:23:23.348 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:23.348 "is_configured": true,
00:23:23.348 "data_offset": 256,
00:23:23.348 "data_size": 7936
00:23:23.348 }
00:23:23.348 ]
00:23:23.348 }
00:23:23.348 }
00:23:23.348 }'
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:23:23.348 pt2'
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 '
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.348 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.607 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 '
00:23:23.607 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]]
00:23:23.607 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:23.607 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.607 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:23:23.608 [2024-11-06 09:15:59.910124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7558ea14-d830-4b50-8578-13b4b02164c8
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 7558ea14-d830-4b50-8578-13b4b02164c8 ']'
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 [2024-11-06 09:15:59.961795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:23.608 [2024-11-06 09:15:59.961944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:23.608 [2024-11-06 09:15:59.962135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:23.608 [2024-11-06 09:15:59.962307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:23:23.608 [2024-11-06 09:15:59.962474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:23:23.608 09:15:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 [2024-11-06 09:16:00.097868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:23:23.608 [2024-11-06 09:16:00.100302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:23:23.608 [2024-11-06 09:16:00.100397] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:23:23.608 [2024-11-06 09:16:00.100471] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:23:23.608 [2024-11-06 09:16:00.100498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:23.608 [2024-11-06 09:16:00.100514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:23:23.608 request:
00:23:23.608 {
00:23:23.608 "name": "raid_bdev1",
00:23:23.608 "raid_level": "raid1",
00:23:23.608 "base_bdevs": [
00:23:23.608 "malloc1",
00:23:23.608 "malloc2"
00:23:23.608 ],
00:23:23.608 "superblock": false,
00:23:23.608 "method": "bdev_raid_create",
00:23:23.608 "req_id": 1
00:23:23.608 }
00:23:23.608 Got JSON-RPC error response
00:23:23.608 response:
00:23:23.608 {
00:23:23.608 "code": -17,
00:23:23.608 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:23:23.608 }
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.608 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.868 [2024-11-06 09:16:00.165859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:23.868 [2024-11-06 09:16:00.166050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:23.868 [2024-11-06 09:16:00.166116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:23:23.868 [2024-11-06 09:16:00.166304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:23.868 [2024-11-06 09:16:00.169126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:23.868 [2024-11-06 09:16:00.169291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:23.868 [2024-11-06 09:16:00.169498] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:23:23.868 [2024-11-06 09:16:00.169694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:23.868 pt1
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:23.868 "name": "raid_bdev1",
00:23:23.868 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8",
00:23:23.868 "strip_size_kb": 0,
00:23:23.868 "state": "configuring",
00:23:23.868 "raid_level": "raid1",
00:23:23.868 "superblock": true,
00:23:23.868 "num_base_bdevs": 2,
00:23:23.868 "num_base_bdevs_discovered": 1,
00:23:23.868 "num_base_bdevs_operational": 2,
00:23:23.868 "base_bdevs_list": [
00:23:23.868 {
00:23:23.868 "name": "pt1",
00:23:23.868 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:23.868 "is_configured": true,
00:23:23.868 "data_offset": 256,
00:23:23.868 "data_size": 7936
00:23:23.868 },
00:23:23.868 {
00:23:23.868 "name": null,
00:23:23.868 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:23.868 "is_configured": false,
00:23:23.868 "data_offset": 256,
00:23:23.868 "data_size": 7936 }
00:23:23.868 ]
00:23:23.868 }'
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:23.868 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:24.435 [2024-11-06 09:16:00.686189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:24.435 [2024-11-06 09:16:00.686272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:24.435 [2024-11-06 09:16:00.686302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:23:24.435 [2024-11-06 09:16:00.686319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:24.435 [2024-11-06 09:16:00.686947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:24.435 [2024-11-06 09:16:00.686987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:24.435 [2024-11-06 09:16:00.687086] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:23:24.435 [2024-11-06 09:16:00.687132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:24.435 [2024-11-06 09:16:00.687291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:23:24.435 [2024-11-06 09:16:00.687312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:23:24.435 [2024-11-06 09:16:00.687606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:23:24.435 [2024-11-06 09:16:00.687806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:23:24.435 [2024-11-06 09:16:00.687846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:23:24.435 [2024-11-06 09:16:00.688019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:24.435 pt2
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.435 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:23:24.435 "name": "raid_bdev1",
00:23:24.435 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8",
00:23:24.435 "strip_size_kb": 0,
00:23:24.435 "state": "online",
00:23:24.435 "raid_level": "raid1",
00:23:24.435 "superblock": true,
00:23:24.435 "num_base_bdevs": 2,
00:23:24.435 "num_base_bdevs_discovered": 2,
00:23:24.435 "num_base_bdevs_operational": 2,
00:23:24.435 "base_bdevs_list": [
00:23:24.435 {
00:23:24.435 "name": "pt1",
00:23:24.435 "uuid": "00000000-0000-0000-0000-000000000001",
00:23:24.435 "is_configured": true,
00:23:24.435 "data_offset": 256,
00:23:24.435 "data_size": 7936
00:23:24.435 },
00:23:24.435 {
00:23:24.435 "name": "pt2",
00:23:24.435 "uuid": "00000000-0000-0000-0000-000000000002",
00:23:24.435 "is_configured": true,
00:23:24.435 "data_offset": 256,
00:23:24.435 "data_size": 7936
00:23:24.435 }
00:23:24.435 ]
00:23:24.435 }'
00:23:24.436 09:16:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:23:24.436 09:16:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:23:24.694 [2024-11-06 09:16:01.174618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:24.694 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:23:24.694 "name": "raid_bdev1",
00:23:24.694 "aliases": [
00:23:24.694 "7558ea14-d830-4b50-8578-13b4b02164c8"
00:23:24.694 ],
00:23:24.694 "product_name": "Raid Volume",
00:23:24.694 "block_size": 4096,
00:23:24.694 "num_blocks": 7936,
00:23:24.694 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8",
00:23:24.694 "assigned_rate_limits": {
00:23:24.694 "rw_ios_per_sec": 0,
00:23:24.694 "rw_mbytes_per_sec": 0,
00:23:24.694 "r_mbytes_per_sec": 0,
00:23:24.694 "w_mbytes_per_sec": 0
00:23:24.694 },
00:23:24.694 "claimed": false,
00:23:24.694 "zoned": false,
00:23:24.694 "supported_io_types": {
00:23:24.694 "read": true,
00:23:24.694 "write": true,
00:23:24.694 "unmap": false,
00:23:24.694 "flush": false, 00:23:24.694 "reset": true, 00:23:24.694 "nvme_admin": false, 00:23:24.694 "nvme_io": false, 00:23:24.694 "nvme_io_md": false, 00:23:24.694 "write_zeroes": true, 00:23:24.694 "zcopy": false, 00:23:24.694 "get_zone_info": false, 00:23:24.694 "zone_management": false, 00:23:24.694 "zone_append": false, 00:23:24.694 "compare": false, 00:23:24.694 "compare_and_write": false, 00:23:24.694 "abort": false, 00:23:24.694 "seek_hole": false, 00:23:24.694 "seek_data": false, 00:23:24.694 "copy": false, 00:23:24.694 "nvme_iov_md": false 00:23:24.694 }, 00:23:24.694 "memory_domains": [ 00:23:24.694 { 00:23:24.694 "dma_device_id": "system", 00:23:24.694 "dma_device_type": 1 00:23:24.694 }, 00:23:24.694 { 00:23:24.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.694 "dma_device_type": 2 00:23:24.694 }, 00:23:24.694 { 00:23:24.694 "dma_device_id": "system", 00:23:24.694 "dma_device_type": 1 00:23:24.694 }, 00:23:24.694 { 00:23:24.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.694 "dma_device_type": 2 00:23:24.694 } 00:23:24.694 ], 00:23:24.694 "driver_specific": { 00:23:24.694 "raid": { 00:23:24.694 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8", 00:23:24.694 "strip_size_kb": 0, 00:23:24.694 "state": "online", 00:23:24.694 "raid_level": "raid1", 00:23:24.694 "superblock": true, 00:23:24.694 "num_base_bdevs": 2, 00:23:24.694 "num_base_bdevs_discovered": 2, 00:23:24.694 "num_base_bdevs_operational": 2, 00:23:24.694 "base_bdevs_list": [ 00:23:24.694 { 00:23:24.694 "name": "pt1", 00:23:24.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:24.694 "is_configured": true, 00:23:24.694 "data_offset": 256, 00:23:24.694 "data_size": 7936 00:23:24.694 }, 00:23:24.694 { 00:23:24.694 "name": "pt2", 00:23:24.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:24.694 "is_configured": true, 00:23:24.694 "data_offset": 256, 00:23:24.694 "data_size": 7936 00:23:24.694 } 00:23:24.694 ] 00:23:24.694 } 00:23:24.694 } 00:23:24.694 }' 00:23:24.694 
09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:24.952 pt2' 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.952 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.953 
09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:24.953 [2024-11-06 09:16:01.450614] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.953 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 7558ea14-d830-4b50-8578-13b4b02164c8 '!=' 7558ea14-d830-4b50-8578-13b4b02164c8 ']' 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.211 [2024-11-06 09:16:01.502381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:25.211 
09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.211 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.212 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.212 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.212 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.212 "name": "raid_bdev1", 00:23:25.212 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8", 
00:23:25.212 "strip_size_kb": 0, 00:23:25.212 "state": "online", 00:23:25.212 "raid_level": "raid1", 00:23:25.212 "superblock": true, 00:23:25.212 "num_base_bdevs": 2, 00:23:25.212 "num_base_bdevs_discovered": 1, 00:23:25.212 "num_base_bdevs_operational": 1, 00:23:25.212 "base_bdevs_list": [ 00:23:25.212 { 00:23:25.212 "name": null, 00:23:25.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.212 "is_configured": false, 00:23:25.212 "data_offset": 0, 00:23:25.212 "data_size": 7936 00:23:25.212 }, 00:23:25.212 { 00:23:25.212 "name": "pt2", 00:23:25.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:25.212 "is_configured": true, 00:23:25.212 "data_offset": 256, 00:23:25.212 "data_size": 7936 00:23:25.212 } 00:23:25.212 ] 00:23:25.212 }' 00:23:25.212 09:16:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.212 09:16:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.777 [2024-11-06 09:16:02.022537] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:25.777 [2024-11-06 09:16:02.022572] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:25.777 [2024-11-06 09:16:02.022681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:25.777 [2024-11-06 09:16:02.022748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:25.777 [2024-11-06 09:16:02.022768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:25.777 09:16:02 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.777 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:23:25.778 09:16:02 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.778 [2024-11-06 09:16:02.090501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:25.778 [2024-11-06 09:16:02.090727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.778 [2024-11-06 09:16:02.090762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:25.778 [2024-11-06 09:16:02.090780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.778 [2024-11-06 09:16:02.093544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.778 [2024-11-06 09:16:02.093593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:25.778 [2024-11-06 09:16:02.093690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:25.778 [2024-11-06 09:16:02.093751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:25.778 [2024-11-06 09:16:02.094018] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:25.778 [2024-11-06 09:16:02.094084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:25.778 [2024-11-06 09:16:02.094501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:25.778 [2024-11-06 09:16:02.094714] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:25.778 [2024-11-06 09:16:02.094731] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:23:25.778 [2024-11-06 09:16:02.094969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:25.778 pt2 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:25.778 "name": "raid_bdev1", 00:23:25.778 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8", 00:23:25.778 "strip_size_kb": 0, 00:23:25.778 "state": "online", 00:23:25.778 "raid_level": "raid1", 00:23:25.778 "superblock": true, 00:23:25.778 "num_base_bdevs": 2, 00:23:25.778 "num_base_bdevs_discovered": 1, 00:23:25.778 "num_base_bdevs_operational": 1, 00:23:25.778 "base_bdevs_list": [ 00:23:25.778 { 00:23:25.778 "name": null, 00:23:25.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.778 "is_configured": false, 00:23:25.778 "data_offset": 256, 00:23:25.778 "data_size": 7936 00:23:25.778 }, 00:23:25.778 { 00:23:25.778 "name": "pt2", 00:23:25.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:25.778 "is_configured": true, 00:23:25.778 "data_offset": 256, 00:23:25.778 "data_size": 7936 00:23:25.778 } 00:23:25.778 ] 00:23:25.778 }' 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:25.778 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.345 [2024-11-06 09:16:02.587010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.345 [2024-11-06 09:16:02.587047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:26.345 [2024-11-06 09:16:02.587132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.345 [2024-11-06 09:16:02.587212] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:26.345 [2024-11-06 09:16:02.587227] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.345 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.345 [2024-11-06 09:16:02.647017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:26.345 [2024-11-06 09:16:02.647196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.345 [2024-11-06 09:16:02.647331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:23:26.345 [2024-11-06 09:16:02.647475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.345 [2024-11-06 09:16:02.650319] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.345 [2024-11-06 09:16:02.650470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:26.345 [2024-11-06 09:16:02.650680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:26.345 [2024-11-06 09:16:02.650746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:26.345 [2024-11-06 09:16:02.650947] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:26.345 [2024-11-06 09:16:02.650966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.345 [2024-11-06 09:16:02.650988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:23:26.345 [2024-11-06 09:16:02.651061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:26.345 [2024-11-06 09:16:02.651211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:23:26.345 [2024-11-06 09:16:02.651228] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:26.346 pt1 00:23:26.346 [2024-11-06 09:16:02.651534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:26.346 [2024-11-06 09:16:02.651716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:23:26.346 [2024-11-06 09:16:02.651743] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:23:26.346 [2024-11-06 09:16:02.651977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:26.346 "name": "raid_bdev1", 00:23:26.346 "uuid": "7558ea14-d830-4b50-8578-13b4b02164c8", 00:23:26.346 "strip_size_kb": 0, 00:23:26.346 "state": "online", 00:23:26.346 "raid_level": "raid1", 
00:23:26.346 "superblock": true, 00:23:26.346 "num_base_bdevs": 2, 00:23:26.346 "num_base_bdevs_discovered": 1, 00:23:26.346 "num_base_bdevs_operational": 1, 00:23:26.346 "base_bdevs_list": [ 00:23:26.346 { 00:23:26.346 "name": null, 00:23:26.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.346 "is_configured": false, 00:23:26.346 "data_offset": 256, 00:23:26.346 "data_size": 7936 00:23:26.346 }, 00:23:26.346 { 00:23:26.346 "name": "pt2", 00:23:26.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:26.346 "is_configured": true, 00:23:26.346 "data_offset": 256, 00:23:26.346 "data_size": 7936 00:23:26.346 } 00:23:26.346 ] 00:23:26.346 }' 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:26.346 09:16:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:26.913 
[2024-11-06 09:16:03.207468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 7558ea14-d830-4b50-8578-13b4b02164c8 '!=' 7558ea14-d830-4b50-8578-13b4b02164c8 ']' 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86503 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 86503 ']' 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 86503 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86503 00:23:26.913 killing process with pid 86503 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86503' 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 86503 00:23:26.913 [2024-11-06 09:16:03.282896] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:26.913 09:16:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # wait 86503 00:23:26.913 [2024-11-06 09:16:03.283016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:26.913 [2024-11-06 09:16:03.283079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:23:26.913 [2024-11-06 09:16:03.283101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:23:27.171 [2024-11-06 09:16:03.461921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:28.106 09:16:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:23:28.106 ************************************ 00:23:28.106 END TEST raid_superblock_test_4k 00:23:28.106 ************************************ 00:23:28.106 00:23:28.106 real 0m6.580s 00:23:28.106 user 0m10.460s 00:23:28.106 sys 0m0.934s 00:23:28.106 09:16:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:28.106 09:16:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:23:28.106 09:16:04 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:23:28.106 09:16:04 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:23:28.106 09:16:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:23:28.106 09:16:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:28.106 09:16:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:28.106 ************************************ 00:23:28.106 START TEST raid_rebuild_test_sb_4k 00:23:28.106 ************************************ 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:23:28.106 09:16:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86830 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86830 00:23:28.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 86830 ']' 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:28.106 09:16:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:28.364 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:28.364 Zero copy mechanism will not be used. 00:23:28.364 [2024-11-06 09:16:04.649202] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:23:28.364 [2024-11-06 09:16:04.649376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86830 ] 00:23:28.364 [2024-11-06 09:16:04.832529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.632 [2024-11-06 09:16:04.960518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.909 [2024-11-06 09:16:05.162297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:28.909 [2024-11-06 09:16:05.162370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.168 BaseBdev1_malloc 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.168 [2024-11-06 09:16:05.687562] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:29.168 [2024-11-06 09:16:05.687789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.168 [2024-11-06 09:16:05.687849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:29.168 [2024-11-06 09:16:05.687872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.168 [2024-11-06 09:16:05.690678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.168 [2024-11-06 09:16:05.690730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:29.168 BaseBdev1 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.168 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.427 BaseBdev2_malloc 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.427 [2024-11-06 09:16:05.744347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:29.427 [2024-11-06 09:16:05.744425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:23:29.427 [2024-11-06 09:16:05.744453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:29.427 [2024-11-06 09:16:05.744472] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.427 [2024-11-06 09:16:05.747227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.427 [2024-11-06 09:16:05.747407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:29.427 BaseBdev2 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.427 spare_malloc 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.427 spare_delay 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.427 
[2024-11-06 09:16:05.825376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:29.427 [2024-11-06 09:16:05.825453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.427 [2024-11-06 09:16:05.825483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:29.427 [2024-11-06 09:16:05.825500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.427 [2024-11-06 09:16:05.828293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.427 [2024-11-06 09:16:05.828343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:29.427 spare 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.427 [2024-11-06 09:16:05.833476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:29.427 [2024-11-06 09:16:05.835987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:29.427 [2024-11-06 09:16:05.836281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:29.427 [2024-11-06 09:16:05.836353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:29.427 [2024-11-06 09:16:05.836799] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:29.427 [2024-11-06 09:16:05.837170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:29.427 [2024-11-06 
09:16:05.837293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:29.427 [2024-11-06 09:16:05.837693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:29.427 "name": "raid_bdev1", 00:23:29.427 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:29.427 "strip_size_kb": 0, 00:23:29.427 "state": "online", 00:23:29.427 "raid_level": "raid1", 00:23:29.427 "superblock": true, 00:23:29.427 "num_base_bdevs": 2, 00:23:29.427 "num_base_bdevs_discovered": 2, 00:23:29.427 "num_base_bdevs_operational": 2, 00:23:29.427 "base_bdevs_list": [ 00:23:29.427 { 00:23:29.427 "name": "BaseBdev1", 00:23:29.427 "uuid": "47cb8c4c-e502-58f8-994d-69b3dee089ad", 00:23:29.427 "is_configured": true, 00:23:29.427 "data_offset": 256, 00:23:29.427 "data_size": 7936 00:23:29.427 }, 00:23:29.427 { 00:23:29.427 "name": "BaseBdev2", 00:23:29.427 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:29.427 "is_configured": true, 00:23:29.427 "data_offset": 256, 00:23:29.427 "data_size": 7936 00:23:29.427 } 00:23:29.427 ] 00:23:29.427 }' 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:29.427 09:16:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:23:29.994 [2024-11-06 09:16:06.334207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:29.994 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:29.995 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:29.995 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:23:29.995 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:29.995 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:29.995 
09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:30.253 [2024-11-06 09:16:06.721982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:30.253 /dev/nbd0 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:30.253 1+0 records in 00:23:30.253 1+0 records out 00:23:30.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038495 s, 10.6 MB/s 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:23:30.253 09:16:06 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:23:30.253 09:16:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:23:31.626 7936+0 records in 00:23:31.626 7936+0 records out 00:23:31.626 32505856 bytes (33 MB, 31 MiB) copied, 0.940715 s, 34.6 MB/s 00:23:31.626 09:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:23:31.626 09:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:31.626 09:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:31.626 09:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:31.626 09:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:23:31.626 09:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:31.626 09:16:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:31.626 
[2024-11-06 09:16:08.070410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:31.626 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:31.627 [2024-11-06 09:16:08.087479] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:31.627 09:16:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:31.627 "name": "raid_bdev1", 00:23:31.627 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:31.627 "strip_size_kb": 0, 00:23:31.627 "state": "online", 00:23:31.627 "raid_level": "raid1", 00:23:31.627 "superblock": true, 00:23:31.627 "num_base_bdevs": 2, 00:23:31.627 "num_base_bdevs_discovered": 1, 00:23:31.627 "num_base_bdevs_operational": 1, 00:23:31.627 "base_bdevs_list": [ 00:23:31.627 { 00:23:31.627 "name": null, 00:23:31.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.627 "is_configured": false, 00:23:31.627 "data_offset": 0, 00:23:31.627 "data_size": 7936 00:23:31.627 }, 00:23:31.627 { 00:23:31.627 "name": "BaseBdev2", 00:23:31.627 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:31.627 "is_configured": true, 00:23:31.627 "data_offset": 256, 00:23:31.627 
"data_size": 7936 00:23:31.627 } 00:23:31.627 ] 00:23:31.627 }' 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:31.627 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:32.201 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:32.201 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.201 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:32.201 [2024-11-06 09:16:08.631703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.201 [2024-11-06 09:16:08.649364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:23:32.201 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.201 09:16:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:23:32.201 [2024-11-06 09:16:08.652159] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.137 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:33.394 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.394 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.394 "name": "raid_bdev1", 00:23:33.394 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:33.394 "strip_size_kb": 0, 00:23:33.394 "state": "online", 00:23:33.394 "raid_level": "raid1", 00:23:33.394 "superblock": true, 00:23:33.394 "num_base_bdevs": 2, 00:23:33.394 "num_base_bdevs_discovered": 2, 00:23:33.394 "num_base_bdevs_operational": 2, 00:23:33.394 "process": { 00:23:33.394 "type": "rebuild", 00:23:33.394 "target": "spare", 00:23:33.394 "progress": { 00:23:33.394 "blocks": 2560, 00:23:33.394 "percent": 32 00:23:33.394 } 00:23:33.394 }, 00:23:33.394 "base_bdevs_list": [ 00:23:33.394 { 00:23:33.394 "name": "spare", 00:23:33.394 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:33.394 "is_configured": true, 00:23:33.394 "data_offset": 256, 00:23:33.394 "data_size": 7936 00:23:33.394 }, 00:23:33.394 { 00:23:33.394 "name": "BaseBdev2", 00:23:33.394 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:33.394 "is_configured": true, 00:23:33.394 "data_offset": 256, 00:23:33.394 "data_size": 7936 00:23:33.394 } 00:23:33.394 ] 00:23:33.394 }' 00:23:33.394 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:33.394 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.394 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:33.394 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:33.395 [2024-11-06 09:16:09.809289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.395 [2024-11-06 09:16:09.861243] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:33.395 [2024-11-06 09:16:09.861591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.395 [2024-11-06 09:16:09.861881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.395 [2024-11-06 09:16:09.861944] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:33.395 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.653 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:33.653 "name": "raid_bdev1", 00:23:33.653 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:33.653 "strip_size_kb": 0, 00:23:33.653 "state": "online", 00:23:33.653 "raid_level": "raid1", 00:23:33.653 "superblock": true, 00:23:33.653 "num_base_bdevs": 2, 00:23:33.653 "num_base_bdevs_discovered": 1, 00:23:33.653 "num_base_bdevs_operational": 1, 00:23:33.653 "base_bdevs_list": [ 00:23:33.653 { 00:23:33.653 "name": null, 00:23:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.653 "is_configured": false, 00:23:33.653 "data_offset": 0, 00:23:33.653 "data_size": 7936 00:23:33.653 }, 00:23:33.653 { 00:23:33.653 "name": "BaseBdev2", 00:23:33.653 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:33.653 "is_configured": true, 00:23:33.653 "data_offset": 256, 00:23:33.653 "data_size": 7936 00:23:33.653 } 00:23:33.653 ] 00:23:33.653 }' 00:23:33.653 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:33.653 09:16:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:33.940 09:16:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.940 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:33.940 "name": "raid_bdev1", 00:23:33.940 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:33.940 "strip_size_kb": 0, 00:23:33.940 "state": "online", 00:23:33.940 "raid_level": "raid1", 00:23:33.940 "superblock": true, 00:23:33.940 "num_base_bdevs": 2, 00:23:33.940 "num_base_bdevs_discovered": 1, 00:23:33.940 "num_base_bdevs_operational": 1, 00:23:33.940 "base_bdevs_list": [ 00:23:33.940 { 00:23:33.940 "name": null, 00:23:33.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.940 "is_configured": false, 00:23:33.940 "data_offset": 0, 00:23:33.940 "data_size": 7936 00:23:33.940 }, 00:23:33.940 { 00:23:33.940 "name": "BaseBdev2", 00:23:33.940 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:33.940 "is_configured": true, 00:23:33.940 "data_offset": 
256, 00:23:33.941 "data_size": 7936 00:23:33.941 } 00:23:33.941 ] 00:23:33.941 }' 00:23:33.941 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:34.198 [2024-11-06 09:16:10.578429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:34.198 [2024-11-06 09:16:10.594300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.198 09:16:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:23:34.198 [2024-11-06 09:16:10.596927] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:35.131 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.131 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.131 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:35.131 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:35.131 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.131 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.131 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.132 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.132 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:35.132 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.132 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.132 "name": "raid_bdev1", 00:23:35.132 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:35.132 "strip_size_kb": 0, 00:23:35.132 "state": "online", 00:23:35.132 "raid_level": "raid1", 00:23:35.132 "superblock": true, 00:23:35.132 "num_base_bdevs": 2, 00:23:35.132 "num_base_bdevs_discovered": 2, 00:23:35.132 "num_base_bdevs_operational": 2, 00:23:35.132 "process": { 00:23:35.132 "type": "rebuild", 00:23:35.132 "target": "spare", 00:23:35.132 "progress": { 00:23:35.132 "blocks": 2560, 00:23:35.132 "percent": 32 00:23:35.132 } 00:23:35.132 }, 00:23:35.132 "base_bdevs_list": [ 00:23:35.132 { 00:23:35.132 "name": "spare", 00:23:35.132 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:35.132 "is_configured": true, 00:23:35.132 "data_offset": 256, 00:23:35.132 "data_size": 7936 00:23:35.132 }, 00:23:35.132 { 00:23:35.132 "name": "BaseBdev2", 00:23:35.132 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:35.132 "is_configured": true, 00:23:35.132 "data_offset": 256, 00:23:35.132 "data_size": 7936 00:23:35.132 } 00:23:35.132 ] 00:23:35.132 }' 00:23:35.132 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:23:35.390 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=731 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:35.390 "name": "raid_bdev1", 00:23:35.390 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:35.390 "strip_size_kb": 0, 00:23:35.390 "state": "online", 00:23:35.390 "raid_level": "raid1", 00:23:35.390 "superblock": true, 00:23:35.390 "num_base_bdevs": 2, 00:23:35.390 "num_base_bdevs_discovered": 2, 00:23:35.390 "num_base_bdevs_operational": 2, 00:23:35.390 "process": { 00:23:35.390 "type": "rebuild", 00:23:35.390 "target": "spare", 00:23:35.390 "progress": { 00:23:35.390 "blocks": 2816, 00:23:35.390 "percent": 35 00:23:35.390 } 00:23:35.390 }, 00:23:35.390 "base_bdevs_list": [ 00:23:35.390 { 00:23:35.390 "name": "spare", 00:23:35.390 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:35.390 "is_configured": true, 00:23:35.390 "data_offset": 256, 00:23:35.390 "data_size": 7936 00:23:35.390 }, 00:23:35.390 { 00:23:35.390 "name": "BaseBdev2", 00:23:35.390 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:35.390 "is_configured": true, 00:23:35.390 "data_offset": 256, 00:23:35.390 "data_size": 7936 00:23:35.390 } 00:23:35.390 ] 00:23:35.390 }' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:35.390 09:16:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:36.766 "name": "raid_bdev1", 00:23:36.766 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:36.766 "strip_size_kb": 0, 00:23:36.766 "state": "online", 00:23:36.766 "raid_level": "raid1", 00:23:36.766 "superblock": true, 00:23:36.766 "num_base_bdevs": 2, 00:23:36.766 "num_base_bdevs_discovered": 2, 00:23:36.766 "num_base_bdevs_operational": 2, 00:23:36.766 "process": { 00:23:36.766 "type": "rebuild", 00:23:36.766 "target": "spare", 00:23:36.766 "progress": { 00:23:36.766 "blocks": 5888, 00:23:36.766 "percent": 74 00:23:36.766 } 00:23:36.766 }, 00:23:36.766 "base_bdevs_list": [ 00:23:36.766 { 
00:23:36.766 "name": "spare", 00:23:36.766 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:36.766 "is_configured": true, 00:23:36.766 "data_offset": 256, 00:23:36.766 "data_size": 7936 00:23:36.766 }, 00:23:36.766 { 00:23:36.766 "name": "BaseBdev2", 00:23:36.766 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:36.766 "is_configured": true, 00:23:36.766 "data_offset": 256, 00:23:36.766 "data_size": 7936 00:23:36.766 } 00:23:36.766 ] 00:23:36.766 }' 00:23:36.766 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:36.767 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.767 09:16:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:36.767 09:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.767 09:16:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:37.365 [2024-11-06 09:16:13.719159] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:37.365 [2024-11-06 09:16:13.719474] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:37.365 [2024-11-06 09:16:13.719658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.623 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:37.623 "name": "raid_bdev1", 00:23:37.623 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:37.623 "strip_size_kb": 0, 00:23:37.623 "state": "online", 00:23:37.623 "raid_level": "raid1", 00:23:37.623 "superblock": true, 00:23:37.623 "num_base_bdevs": 2, 00:23:37.623 "num_base_bdevs_discovered": 2, 00:23:37.624 "num_base_bdevs_operational": 2, 00:23:37.624 "base_bdevs_list": [ 00:23:37.624 { 00:23:37.624 "name": "spare", 00:23:37.624 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:37.624 "is_configured": true, 00:23:37.624 "data_offset": 256, 00:23:37.624 "data_size": 7936 00:23:37.624 }, 00:23:37.624 { 00:23:37.624 "name": "BaseBdev2", 00:23:37.624 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:37.624 "is_configured": true, 00:23:37.624 "data_offset": 256, 00:23:37.624 "data_size": 7936 00:23:37.624 } 00:23:37.624 ] 00:23:37.624 }' 00:23:37.624 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:37.624 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:37.624 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:37.883 "name": "raid_bdev1", 00:23:37.883 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:37.883 "strip_size_kb": 0, 00:23:37.883 "state": "online", 00:23:37.883 "raid_level": "raid1", 00:23:37.883 "superblock": true, 00:23:37.883 "num_base_bdevs": 2, 00:23:37.883 "num_base_bdevs_discovered": 2, 00:23:37.883 "num_base_bdevs_operational": 2, 00:23:37.883 "base_bdevs_list": [ 00:23:37.883 { 00:23:37.883 "name": "spare", 00:23:37.883 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:37.883 "is_configured": true, 00:23:37.883 
"data_offset": 256, 00:23:37.883 "data_size": 7936 00:23:37.883 }, 00:23:37.883 { 00:23:37.883 "name": "BaseBdev2", 00:23:37.883 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:37.883 "is_configured": true, 00:23:37.883 "data_offset": 256, 00:23:37.883 "data_size": 7936 00:23:37.883 } 00:23:37.883 ] 00:23:37.883 }' 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:37.883 "name": "raid_bdev1", 00:23:37.883 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:37.883 "strip_size_kb": 0, 00:23:37.883 "state": "online", 00:23:37.883 "raid_level": "raid1", 00:23:37.883 "superblock": true, 00:23:37.883 "num_base_bdevs": 2, 00:23:37.883 "num_base_bdevs_discovered": 2, 00:23:37.883 "num_base_bdevs_operational": 2, 00:23:37.883 "base_bdevs_list": [ 00:23:37.883 { 00:23:37.883 "name": "spare", 00:23:37.883 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:37.883 "is_configured": true, 00:23:37.883 "data_offset": 256, 00:23:37.883 "data_size": 7936 00:23:37.883 }, 00:23:37.883 { 00:23:37.883 "name": "BaseBdev2", 00:23:37.883 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:37.883 "is_configured": true, 00:23:37.883 "data_offset": 256, 00:23:37.883 "data_size": 7936 00:23:37.883 } 00:23:37.883 ] 00:23:37.883 }' 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:37.883 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 
[2024-11-06 09:16:14.851826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:38.448 [2024-11-06 09:16:14.852001] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:38.448 [2024-11-06 09:16:14.852205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:38.448 [2024-11-06 09:16:14.852410] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:38.448 [2024-11-06 09:16:14.852557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:38.448 09:16:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:38.706 /dev/nbd0 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:38.706 1+0 records in 00:23:38.706 1+0 records out 00:23:38.706 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245079 s, 16.7 MB/s 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:38.706 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:23:39.272 /dev/nbd1 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.272 1+0 records in 00:23:39.272 1+0 records out 00:23:39.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362494 s, 11.3 MB/s 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.272 09:16:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:23:39.532 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.790 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:40.051 09:16:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.051 [2024-11-06 09:16:16.367637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:40.051 [2024-11-06 09:16:16.367701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:40.051 [2024-11-06 09:16:16.367749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:40.051 [2024-11-06 09:16:16.367763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:40.051 [2024-11-06 09:16:16.370723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:40.051 
[2024-11-06 09:16:16.370770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:40.051 [2024-11-06 09:16:16.370913] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:40.051 [2024-11-06 09:16:16.370987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.051 [2024-11-06 09:16:16.371207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:40.051 spare 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.051 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.051 [2024-11-06 09:16:16.471350] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:40.051 [2024-11-06 09:16:16.471612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:40.051 [2024-11-06 09:16:16.472046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:23:40.051 [2024-11-06 09:16:16.472331] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:40.051 [2024-11-06 09:16:16.472356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:40.052 [2024-11-06 09:16:16.472618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:40.052 09:16:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.052 "name": "raid_bdev1", 00:23:40.052 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:40.052 "strip_size_kb": 0, 00:23:40.052 "state": "online", 00:23:40.052 "raid_level": "raid1", 00:23:40.052 "superblock": true, 00:23:40.052 "num_base_bdevs": 2, 00:23:40.052 "num_base_bdevs_discovered": 2, 00:23:40.052 "num_base_bdevs_operational": 2, 
00:23:40.052 "base_bdevs_list": [ 00:23:40.052 { 00:23:40.052 "name": "spare", 00:23:40.052 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:40.052 "is_configured": true, 00:23:40.052 "data_offset": 256, 00:23:40.052 "data_size": 7936 00:23:40.052 }, 00:23:40.052 { 00:23:40.052 "name": "BaseBdev2", 00:23:40.052 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:40.052 "is_configured": true, 00:23:40.052 "data_offset": 256, 00:23:40.052 "data_size": 7936 00:23:40.052 } 00:23:40.052 ] 00:23:40.052 }' 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.052 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.623 09:16:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.623 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.623 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:40.623 "name": "raid_bdev1", 00:23:40.623 
"uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:40.623 "strip_size_kb": 0, 00:23:40.623 "state": "online", 00:23:40.623 "raid_level": "raid1", 00:23:40.623 "superblock": true, 00:23:40.624 "num_base_bdevs": 2, 00:23:40.624 "num_base_bdevs_discovered": 2, 00:23:40.624 "num_base_bdevs_operational": 2, 00:23:40.624 "base_bdevs_list": [ 00:23:40.624 { 00:23:40.624 "name": "spare", 00:23:40.624 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:40.624 "is_configured": true, 00:23:40.624 "data_offset": 256, 00:23:40.624 "data_size": 7936 00:23:40.624 }, 00:23:40.624 { 00:23:40.624 "name": "BaseBdev2", 00:23:40.624 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:40.624 "is_configured": true, 00:23:40.624 "data_offset": 256, 00:23:40.624 "data_size": 7936 00:23:40.624 } 00:23:40.624 ] 00:23:40.624 }' 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:40.624 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.882 [2024-11-06 09:16:17.200759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:40.882 09:16:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:40.882 "name": "raid_bdev1", 00:23:40.882 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:40.882 "strip_size_kb": 0, 00:23:40.882 "state": "online", 00:23:40.882 "raid_level": "raid1", 00:23:40.882 "superblock": true, 00:23:40.882 "num_base_bdevs": 2, 00:23:40.882 "num_base_bdevs_discovered": 1, 00:23:40.882 "num_base_bdevs_operational": 1, 00:23:40.882 "base_bdevs_list": [ 00:23:40.882 { 00:23:40.882 "name": null, 00:23:40.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.882 "is_configured": false, 00:23:40.882 "data_offset": 0, 00:23:40.882 "data_size": 7936 00:23:40.882 }, 00:23:40.882 { 00:23:40.882 "name": "BaseBdev2", 00:23:40.882 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:40.882 "is_configured": true, 00:23:40.882 "data_offset": 256, 00:23:40.882 "data_size": 7936 00:23:40.882 } 00:23:40.882 ] 00:23:40.882 }' 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:40.882 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:41.447 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:41.447 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:41.447 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:41.447 [2024-11-06 09:16:17.741061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.447 [2024-11-06 09:16:17.741415] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:23:41.447 [2024-11-06 09:16:17.741457] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:23:41.447 [2024-11-06 09:16:17.741536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:41.447 [2024-11-06 09:16:17.761753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:23:41.447 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:41.447 09:16:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:41.447 [2024-11-06 09:16:17.765106] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:42.380 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:42.380 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:42.381 "name": "raid_bdev1", 00:23:42.381 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:42.381 "strip_size_kb": 0, 00:23:42.381 "state": "online", 00:23:42.381 "raid_level": "raid1", 00:23:42.381 "superblock": true, 00:23:42.381 "num_base_bdevs": 2, 00:23:42.381 "num_base_bdevs_discovered": 2, 00:23:42.381 "num_base_bdevs_operational": 2, 00:23:42.381 "process": { 00:23:42.381 "type": "rebuild", 00:23:42.381 "target": "spare", 00:23:42.381 "progress": { 00:23:42.381 "blocks": 2560, 00:23:42.381 "percent": 32 00:23:42.381 } 00:23:42.381 }, 00:23:42.381 "base_bdevs_list": [ 00:23:42.381 { 00:23:42.381 "name": "spare", 00:23:42.381 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:42.381 "is_configured": true, 00:23:42.381 "data_offset": 256, 00:23:42.381 "data_size": 7936 00:23:42.381 }, 00:23:42.381 { 00:23:42.381 "name": "BaseBdev2", 00:23:42.381 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:42.381 "is_configured": true, 00:23:42.381 "data_offset": 256, 00:23:42.381 "data_size": 7936 00:23:42.381 } 00:23:42.381 ] 00:23:42.381 }' 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.381 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:42.638 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.638 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:42.638 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.639 09:16:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:42.639 [2024-11-06 09:16:18.930631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:23:42.639 [2024-11-06 09:16:18.973617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:42.639 [2024-11-06 09:16:18.973698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.639 [2024-11-06 09:16:18.973722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:42.639 [2024-11-06 09:16:18.973737] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:42.639 "name": "raid_bdev1", 00:23:42.639 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:42.639 "strip_size_kb": 0, 00:23:42.639 "state": "online", 00:23:42.639 "raid_level": "raid1", 00:23:42.639 "superblock": true, 00:23:42.639 "num_base_bdevs": 2, 00:23:42.639 "num_base_bdevs_discovered": 1, 00:23:42.639 "num_base_bdevs_operational": 1, 00:23:42.639 "base_bdevs_list": [ 00:23:42.639 { 00:23:42.639 "name": null, 00:23:42.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.639 "is_configured": false, 00:23:42.639 "data_offset": 0, 00:23:42.639 "data_size": 7936 00:23:42.639 }, 00:23:42.639 { 00:23:42.639 "name": "BaseBdev2", 00:23:42.639 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:42.639 "is_configured": true, 00:23:42.639 "data_offset": 256, 00:23:42.639 "data_size": 7936 00:23:42.639 } 00:23:42.639 ] 00:23:42.639 }' 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:42.639 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:43.204 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:43.204 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.204 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:43.204 [2024-11-06 09:16:19.557216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:43.204 [2024-11-06 
09:16:19.557434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.204 [2024-11-06 09:16:19.557506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:43.204 [2024-11-06 09:16:19.557730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.204 [2024-11-06 09:16:19.558390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.204 [2024-11-06 09:16:19.558555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:43.204 [2024-11-06 09:16:19.558703] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:43.204 [2024-11-06 09:16:19.558730] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:43.204 [2024-11-06 09:16:19.558745] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:43.204 [2024-11-06 09:16:19.558782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:43.204 [2024-11-06 09:16:19.573966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:23:43.204 spare 00:23:43.205 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.205 09:16:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:43.205 [2024-11-06 09:16:19.576399] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:44.138 "name": "raid_bdev1", 00:23:44.138 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:44.138 "strip_size_kb": 0, 00:23:44.138 
"state": "online", 00:23:44.138 "raid_level": "raid1", 00:23:44.138 "superblock": true, 00:23:44.138 "num_base_bdevs": 2, 00:23:44.138 "num_base_bdevs_discovered": 2, 00:23:44.138 "num_base_bdevs_operational": 2, 00:23:44.138 "process": { 00:23:44.138 "type": "rebuild", 00:23:44.138 "target": "spare", 00:23:44.138 "progress": { 00:23:44.138 "blocks": 2560, 00:23:44.138 "percent": 32 00:23:44.138 } 00:23:44.138 }, 00:23:44.138 "base_bdevs_list": [ 00:23:44.138 { 00:23:44.138 "name": "spare", 00:23:44.138 "uuid": "1391a55c-07f4-58da-80db-4ef07211434a", 00:23:44.138 "is_configured": true, 00:23:44.138 "data_offset": 256, 00:23:44.138 "data_size": 7936 00:23:44.138 }, 00:23:44.138 { 00:23:44.138 "name": "BaseBdev2", 00:23:44.138 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:44.138 "is_configured": true, 00:23:44.138 "data_offset": 256, 00:23:44.138 "data_size": 7936 00:23:44.138 } 00:23:44.138 ] 00:23:44.138 }' 00:23:44.138 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:44.396 [2024-11-06 09:16:20.733704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.396 [2024-11-06 09:16:20.784738] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:23:44.396 [2024-11-06 09:16:20.784833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.396 [2024-11-06 09:16:20.784864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:44.396 [2024-11-06 09:16:20.784876] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.396 09:16:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.396 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:44.396 "name": "raid_bdev1", 00:23:44.396 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:44.396 "strip_size_kb": 0, 00:23:44.396 "state": "online", 00:23:44.396 "raid_level": "raid1", 00:23:44.396 "superblock": true, 00:23:44.396 "num_base_bdevs": 2, 00:23:44.396 "num_base_bdevs_discovered": 1, 00:23:44.396 "num_base_bdevs_operational": 1, 00:23:44.396 "base_bdevs_list": [ 00:23:44.396 { 00:23:44.396 "name": null, 00:23:44.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.396 "is_configured": false, 00:23:44.396 "data_offset": 0, 00:23:44.396 "data_size": 7936 00:23:44.396 }, 00:23:44.396 { 00:23:44.396 "name": "BaseBdev2", 00:23:44.396 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:44.396 "is_configured": true, 00:23:44.396 "data_offset": 256, 00:23:44.396 "data_size": 7936 00:23:44.396 } 00:23:44.396 ] 00:23:44.396 }' 00:23:44.397 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:44.397 09:16:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.963 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:44.963 "name": "raid_bdev1", 00:23:44.963 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:44.963 "strip_size_kb": 0, 00:23:44.963 "state": "online", 00:23:44.963 "raid_level": "raid1", 00:23:44.963 "superblock": true, 00:23:44.963 "num_base_bdevs": 2, 00:23:44.963 "num_base_bdevs_discovered": 1, 00:23:44.963 "num_base_bdevs_operational": 1, 00:23:44.963 "base_bdevs_list": [ 00:23:44.963 { 00:23:44.963 "name": null, 00:23:44.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.963 "is_configured": false, 00:23:44.963 "data_offset": 0, 00:23:44.963 "data_size": 7936 00:23:44.963 }, 00:23:44.963 { 00:23:44.963 "name": "BaseBdev2", 00:23:44.963 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:44.964 "is_configured": true, 00:23:44.964 "data_offset": 256, 00:23:44.964 "data_size": 7936 00:23:44.964 } 00:23:44.964 ] 00:23:44.964 }' 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.964 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:44.964 [2024-11-06 09:16:21.496212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:44.964 [2024-11-06 09:16:21.496273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.964 [2024-11-06 09:16:21.496305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:44.964 [2024-11-06 09:16:21.496330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.964 [2024-11-06 09:16:21.496893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.964 [2024-11-06 09:16:21.496919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:44.964 [2024-11-06 09:16:21.497018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:44.964 [2024-11-06 09:16:21.497040] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:44.964 [2024-11-06 09:16:21.497056] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:44.964 [2024-11-06 09:16:21.497069] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:45.221 BaseBdev1 00:23:45.221 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.221 09:16:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:46.205 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:46.205 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:46.205 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:46.206 "name": "raid_bdev1", 00:23:46.206 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:46.206 "strip_size_kb": 0, 00:23:46.206 "state": "online", 00:23:46.206 "raid_level": "raid1", 00:23:46.206 "superblock": true, 00:23:46.206 "num_base_bdevs": 2, 00:23:46.206 "num_base_bdevs_discovered": 1, 00:23:46.206 "num_base_bdevs_operational": 1, 00:23:46.206 "base_bdevs_list": [ 00:23:46.206 { 00:23:46.206 "name": null, 00:23:46.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.206 "is_configured": false, 00:23:46.206 "data_offset": 0, 00:23:46.206 "data_size": 7936 00:23:46.206 }, 00:23:46.206 { 00:23:46.206 "name": "BaseBdev2", 00:23:46.206 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:46.206 "is_configured": true, 00:23:46.206 "data_offset": 256, 00:23:46.206 "data_size": 7936 00:23:46.206 } 00:23:46.206 ] 00:23:46.206 }' 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:46.206 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:46.464 09:16:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:46.723 "name": "raid_bdev1", 00:23:46.723 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:46.723 "strip_size_kb": 0, 00:23:46.723 "state": "online", 00:23:46.723 "raid_level": "raid1", 00:23:46.723 "superblock": true, 00:23:46.723 "num_base_bdevs": 2, 00:23:46.723 "num_base_bdevs_discovered": 1, 00:23:46.723 "num_base_bdevs_operational": 1, 00:23:46.723 "base_bdevs_list": [ 00:23:46.723 { 00:23:46.723 "name": null, 00:23:46.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:46.723 "is_configured": false, 00:23:46.723 "data_offset": 0, 00:23:46.723 "data_size": 7936 00:23:46.723 }, 00:23:46.723 { 00:23:46.723 "name": "BaseBdev2", 00:23:46.723 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:46.723 "is_configured": true, 00:23:46.723 "data_offset": 256, 00:23:46.723 "data_size": 7936 00:23:46.723 } 00:23:46.723 ] 00:23:46.723 }' 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:46.723 [2024-11-06 09:16:23.156730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:46.723 [2024-11-06 09:16:23.156943] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:46.723 [2024-11-06 09:16:23.156971] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:46.723 request: 00:23:46.723 { 00:23:46.723 "base_bdev": "BaseBdev1", 00:23:46.723 "raid_bdev": "raid_bdev1", 00:23:46.723 "method": "bdev_raid_add_base_bdev", 00:23:46.723 "req_id": 1 00:23:46.723 } 00:23:46.723 Got JSON-RPC error response 00:23:46.723 response: 00:23:46.723 { 00:23:46.723 "code": -22, 00:23:46.723 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:46.723 } 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:46.723 09:16:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:47.657 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.916 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:47.916 "name": "raid_bdev1", 00:23:47.916 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:47.916 "strip_size_kb": 0, 00:23:47.916 "state": "online", 00:23:47.916 "raid_level": "raid1", 00:23:47.916 "superblock": true, 00:23:47.916 "num_base_bdevs": 2, 00:23:47.916 "num_base_bdevs_discovered": 1, 00:23:47.916 "num_base_bdevs_operational": 1, 00:23:47.916 "base_bdevs_list": [ 00:23:47.916 { 00:23:47.916 "name": null, 00:23:47.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.916 "is_configured": false, 00:23:47.916 "data_offset": 0, 00:23:47.916 "data_size": 7936 00:23:47.916 }, 00:23:47.916 { 00:23:47.916 "name": "BaseBdev2", 00:23:47.916 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:47.916 "is_configured": true, 00:23:47.916 "data_offset": 256, 00:23:47.916 "data_size": 7936 00:23:47.916 } 00:23:47.916 ] 00:23:47.916 }' 00:23:47.916 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:47.916 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:48.174 09:16:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:48.174 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:48.433 "name": "raid_bdev1", 00:23:48.433 "uuid": "7982c632-2059-4702-8897-062ba15a7acd", 00:23:48.433 "strip_size_kb": 0, 00:23:48.433 "state": "online", 00:23:48.433 "raid_level": "raid1", 00:23:48.433 "superblock": true, 00:23:48.433 "num_base_bdevs": 2, 00:23:48.433 "num_base_bdevs_discovered": 1, 00:23:48.433 "num_base_bdevs_operational": 1, 00:23:48.433 "base_bdevs_list": [ 00:23:48.433 { 00:23:48.433 "name": null, 00:23:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.433 "is_configured": false, 00:23:48.433 "data_offset": 0, 00:23:48.433 "data_size": 7936 00:23:48.433 }, 00:23:48.433 { 00:23:48.433 "name": "BaseBdev2", 00:23:48.433 "uuid": "d0f04a22-9a40-5389-bf5e-f86d339e4a8b", 00:23:48.433 "is_configured": true, 00:23:48.433 "data_offset": 256, 00:23:48.433 "data_size": 7936 00:23:48.433 } 00:23:48.433 ] 00:23:48.433 }' 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:48.433 09:16:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86830 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 86830 ']' 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 86830 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86830 00:23:48.433 killing process with pid 86830 00:23:48.433 Received shutdown signal, test time was about 60.000000 seconds 00:23:48.433 00:23:48.433 Latency(us) 00:23:48.433 [2024-11-06T09:16:24.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.433 [2024-11-06T09:16:24.968Z] =================================================================================================================== 00:23:48.433 [2024-11-06T09:16:24.968Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86830' 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 86830 00:23:48.433 [2024-11-06 09:16:24.865627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.433 09:16:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 86830 00:23:48.433 [2024-11-06 09:16:24.865780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.433 [2024-11-06 
09:16:24.865892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.433 [2024-11-06 09:16:24.865916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:48.703 [2024-11-06 09:16:25.143125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.637 09:16:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:23:49.637 00:23:49.637 real 0m21.625s 00:23:49.637 user 0m29.302s 00:23:49.637 sys 0m2.529s 00:23:49.637 ************************************ 00:23:49.637 END TEST raid_rebuild_test_sb_4k 00:23:49.637 ************************************ 00:23:49.637 09:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:49.637 09:16:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:23:49.896 09:16:26 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:23:49.896 09:16:26 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:23:49.896 09:16:26 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:49.896 09:16:26 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:49.896 09:16:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.896 ************************************ 00:23:49.896 START TEST raid_state_function_test_sb_md_separate 00:23:49.896 ************************************ 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:23:49.896 
09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:23:49.896 09:16:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:23:49.896 Process raid pid: 87535 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87535 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87535' 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87535 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87535 ']' 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:49.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:49.896 09:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:49.896 [2024-11-06 09:16:26.305046] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:23:49.896 [2024-11-06 09:16:26.305384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.154 [2024-11-06 09:16:26.481686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.154 [2024-11-06 09:16:26.615750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.412 [2024-11-06 09:16:26.822263] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.412 [2024-11-06 09:16:26.822501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.978 [2024-11-06 09:16:27.361076] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:50.978 [2024-11-06 09:16:27.361141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:23:50.978 [2024-11-06 09:16:27.361159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:50.978 [2024-11-06 09:16:27.361177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:50.978 "name": "Existed_Raid", 00:23:50.978 "uuid": "ff772642-1f27-4a5b-ad51-f40b5ace9827", 00:23:50.978 "strip_size_kb": 0, 00:23:50.978 "state": "configuring", 00:23:50.978 "raid_level": "raid1", 00:23:50.978 "superblock": true, 00:23:50.978 "num_base_bdevs": 2, 00:23:50.978 "num_base_bdevs_discovered": 0, 00:23:50.978 "num_base_bdevs_operational": 2, 00:23:50.978 "base_bdevs_list": [ 00:23:50.978 { 00:23:50.978 "name": "BaseBdev1", 00:23:50.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.978 "is_configured": false, 00:23:50.978 "data_offset": 0, 00:23:50.978 "data_size": 0 00:23:50.978 }, 00:23:50.978 { 00:23:50.978 "name": "BaseBdev2", 00:23:50.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:50.978 "is_configured": false, 00:23:50.978 "data_offset": 0, 00:23:50.978 "data_size": 0 00:23:50.978 } 00:23:50.978 ] 00:23:50.978 }' 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:50.978 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.544 
[2024-11-06 09:16:27.873144] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:51.544 [2024-11-06 09:16:27.873190] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.544 [2024-11-06 09:16:27.881130] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:51.544 [2024-11-06 09:16:27.881297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:51.544 [2024-11-06 09:16:27.881425] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:51.544 [2024-11-06 09:16:27.881490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:23:51.544 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.545 [2024-11-06 09:16:27.930991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:51.545 
BaseBdev1 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.545 [ 00:23:51.545 { 00:23:51.545 "name": "BaseBdev1", 00:23:51.545 "aliases": [ 00:23:51.545 "8a2e7c60-1e8c-4b69-ac61-1d9116b8aec2" 00:23:51.545 ], 00:23:51.545 "product_name": "Malloc disk", 
00:23:51.545 "block_size": 4096, 00:23:51.545 "num_blocks": 8192, 00:23:51.545 "uuid": "8a2e7c60-1e8c-4b69-ac61-1d9116b8aec2", 00:23:51.545 "md_size": 32, 00:23:51.545 "md_interleave": false, 00:23:51.545 "dif_type": 0, 00:23:51.545 "assigned_rate_limits": { 00:23:51.545 "rw_ios_per_sec": 0, 00:23:51.545 "rw_mbytes_per_sec": 0, 00:23:51.545 "r_mbytes_per_sec": 0, 00:23:51.545 "w_mbytes_per_sec": 0 00:23:51.545 }, 00:23:51.545 "claimed": true, 00:23:51.545 "claim_type": "exclusive_write", 00:23:51.545 "zoned": false, 00:23:51.545 "supported_io_types": { 00:23:51.545 "read": true, 00:23:51.545 "write": true, 00:23:51.545 "unmap": true, 00:23:51.545 "flush": true, 00:23:51.545 "reset": true, 00:23:51.545 "nvme_admin": false, 00:23:51.545 "nvme_io": false, 00:23:51.545 "nvme_io_md": false, 00:23:51.545 "write_zeroes": true, 00:23:51.545 "zcopy": true, 00:23:51.545 "get_zone_info": false, 00:23:51.545 "zone_management": false, 00:23:51.545 "zone_append": false, 00:23:51.545 "compare": false, 00:23:51.545 "compare_and_write": false, 00:23:51.545 "abort": true, 00:23:51.545 "seek_hole": false, 00:23:51.545 "seek_data": false, 00:23:51.545 "copy": true, 00:23:51.545 "nvme_iov_md": false 00:23:51.545 }, 00:23:51.545 "memory_domains": [ 00:23:51.545 { 00:23:51.545 "dma_device_id": "system", 00:23:51.545 "dma_device_type": 1 00:23:51.545 }, 00:23:51.545 { 00:23:51.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.545 "dma_device_type": 2 00:23:51.545 } 00:23:51.545 ], 00:23:51.545 "driver_specific": {} 00:23:51.545 } 00:23:51.545 ] 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:51.545 09:16:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:51.545 09:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.545 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:51.545 "name": "Existed_Raid", 00:23:51.545 "uuid": "d857cc6f-8774-4eb1-8447-407f0d21c368", 
00:23:51.545 "strip_size_kb": 0, 00:23:51.545 "state": "configuring", 00:23:51.545 "raid_level": "raid1", 00:23:51.545 "superblock": true, 00:23:51.545 "num_base_bdevs": 2, 00:23:51.545 "num_base_bdevs_discovered": 1, 00:23:51.545 "num_base_bdevs_operational": 2, 00:23:51.545 "base_bdevs_list": [ 00:23:51.545 { 00:23:51.545 "name": "BaseBdev1", 00:23:51.545 "uuid": "8a2e7c60-1e8c-4b69-ac61-1d9116b8aec2", 00:23:51.545 "is_configured": true, 00:23:51.545 "data_offset": 256, 00:23:51.545 "data_size": 7936 00:23:51.545 }, 00:23:51.545 { 00:23:51.545 "name": "BaseBdev2", 00:23:51.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.545 "is_configured": false, 00:23:51.545 "data_offset": 0, 00:23:51.545 "data_size": 0 00:23:51.545 } 00:23:51.545 ] 00:23:51.545 }' 00:23:51.545 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:51.545 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.127 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:23:52.127 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.127 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.127 [2024-11-06 09:16:28.491250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:52.127 [2024-11-06 09:16:28.491313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:52.127 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.127 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:23:52.127 09:16:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.127 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.127 [2024-11-06 09:16:28.499264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.128 [2024-11-06 09:16:28.501671] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:52.128 [2024-11-06 09:16:28.501729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.128 "name": "Existed_Raid", 00:23:52.128 "uuid": "86543e11-8fb1-4823-a39b-45c4f86dc0e7", 00:23:52.128 "strip_size_kb": 0, 00:23:52.128 "state": "configuring", 00:23:52.128 "raid_level": "raid1", 00:23:52.128 "superblock": true, 00:23:52.128 "num_base_bdevs": 2, 00:23:52.128 "num_base_bdevs_discovered": 1, 00:23:52.128 "num_base_bdevs_operational": 2, 00:23:52.128 "base_bdevs_list": [ 00:23:52.128 { 00:23:52.128 "name": "BaseBdev1", 00:23:52.128 "uuid": "8a2e7c60-1e8c-4b69-ac61-1d9116b8aec2", 00:23:52.128 "is_configured": true, 00:23:52.128 "data_offset": 256, 00:23:52.128 "data_size": 7936 00:23:52.128 }, 00:23:52.128 { 00:23:52.128 "name": "BaseBdev2", 00:23:52.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.128 "is_configured": false, 00:23:52.128 "data_offset": 0, 00:23:52.128 "data_size": 0 00:23:52.128 } 00:23:52.128 ] 00:23:52.128 }' 00:23:52.128 09:16:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.128 09:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 [2024-11-06 09:16:29.079291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:52.694 [2024-11-06 09:16:29.079589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:52.694 [2024-11-06 09:16:29.079610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:52.694 [2024-11-06 09:16:29.079710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:52.694 [2024-11-06 09:16:29.079891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:52.694 [2024-11-06 09:16:29.079911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:52.694 BaseBdev2 00:23:52.694 [2024-11-06 09:16:29.080022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.694 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.694 [ 00:23:52.694 { 00:23:52.694 "name": "BaseBdev2", 00:23:52.694 "aliases": [ 00:23:52.694 "0293a5e6-0265-42dc-8316-3278ff698e0d" 00:23:52.694 ], 00:23:52.694 "product_name": "Malloc disk", 00:23:52.694 "block_size": 4096, 00:23:52.694 "num_blocks": 8192, 00:23:52.694 "uuid": "0293a5e6-0265-42dc-8316-3278ff698e0d", 00:23:52.694 "md_size": 32, 00:23:52.694 "md_interleave": false, 00:23:52.694 "dif_type": 0, 00:23:52.694 "assigned_rate_limits": { 00:23:52.694 "rw_ios_per_sec": 0, 00:23:52.694 "rw_mbytes_per_sec": 0, 00:23:52.694 "r_mbytes_per_sec": 0, 00:23:52.694 "w_mbytes_per_sec": 0 00:23:52.694 }, 00:23:52.694 "claimed": true, 00:23:52.694 "claim_type": 
"exclusive_write", 00:23:52.694 "zoned": false, 00:23:52.694 "supported_io_types": { 00:23:52.694 "read": true, 00:23:52.694 "write": true, 00:23:52.694 "unmap": true, 00:23:52.694 "flush": true, 00:23:52.694 "reset": true, 00:23:52.694 "nvme_admin": false, 00:23:52.694 "nvme_io": false, 00:23:52.694 "nvme_io_md": false, 00:23:52.694 "write_zeroes": true, 00:23:52.694 "zcopy": true, 00:23:52.694 "get_zone_info": false, 00:23:52.694 "zone_management": false, 00:23:52.694 "zone_append": false, 00:23:52.694 "compare": false, 00:23:52.694 "compare_and_write": false, 00:23:52.694 "abort": true, 00:23:52.694 "seek_hole": false, 00:23:52.694 "seek_data": false, 00:23:52.694 "copy": true, 00:23:52.694 "nvme_iov_md": false 00:23:52.694 }, 00:23:52.694 "memory_domains": [ 00:23:52.694 { 00:23:52.694 "dma_device_id": "system", 00:23:52.694 "dma_device_type": 1 00:23:52.694 }, 00:23:52.694 { 00:23:52.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.694 "dma_device_type": 2 00:23:52.694 } 00:23:52.694 ], 00:23:52.694 "driver_specific": {} 00:23:52.694 } 00:23:52.694 ] 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:52.695 
09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:52.695 "name": "Existed_Raid", 00:23:52.695 "uuid": "86543e11-8fb1-4823-a39b-45c4f86dc0e7", 00:23:52.695 "strip_size_kb": 0, 00:23:52.695 "state": "online", 00:23:52.695 "raid_level": "raid1", 00:23:52.695 "superblock": true, 00:23:52.695 "num_base_bdevs": 2, 00:23:52.695 "num_base_bdevs_discovered": 2, 00:23:52.695 "num_base_bdevs_operational": 2, 00:23:52.695 
"base_bdevs_list": [ 00:23:52.695 { 00:23:52.695 "name": "BaseBdev1", 00:23:52.695 "uuid": "8a2e7c60-1e8c-4b69-ac61-1d9116b8aec2", 00:23:52.695 "is_configured": true, 00:23:52.695 "data_offset": 256, 00:23:52.695 "data_size": 7936 00:23:52.695 }, 00:23:52.695 { 00:23:52.695 "name": "BaseBdev2", 00:23:52.695 "uuid": "0293a5e6-0265-42dc-8316-3278ff698e0d", 00:23:52.695 "is_configured": true, 00:23:52.695 "data_offset": 256, 00:23:52.695 "data_size": 7936 00:23:52.695 } 00:23:52.695 ] 00:23:52.695 }' 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:52.695 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:23:53.261 [2024-11-06 09:16:29.627894] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:53.261 "name": "Existed_Raid", 00:23:53.261 "aliases": [ 00:23:53.261 "86543e11-8fb1-4823-a39b-45c4f86dc0e7" 00:23:53.261 ], 00:23:53.261 "product_name": "Raid Volume", 00:23:53.261 "block_size": 4096, 00:23:53.261 "num_blocks": 7936, 00:23:53.261 "uuid": "86543e11-8fb1-4823-a39b-45c4f86dc0e7", 00:23:53.261 "md_size": 32, 00:23:53.261 "md_interleave": false, 00:23:53.261 "dif_type": 0, 00:23:53.261 "assigned_rate_limits": { 00:23:53.261 "rw_ios_per_sec": 0, 00:23:53.261 "rw_mbytes_per_sec": 0, 00:23:53.261 "r_mbytes_per_sec": 0, 00:23:53.261 "w_mbytes_per_sec": 0 00:23:53.261 }, 00:23:53.261 "claimed": false, 00:23:53.261 "zoned": false, 00:23:53.261 "supported_io_types": { 00:23:53.261 "read": true, 00:23:53.261 "write": true, 00:23:53.261 "unmap": false, 00:23:53.261 "flush": false, 00:23:53.261 "reset": true, 00:23:53.261 "nvme_admin": false, 00:23:53.261 "nvme_io": false, 00:23:53.261 "nvme_io_md": false, 00:23:53.261 "write_zeroes": true, 00:23:53.261 "zcopy": false, 00:23:53.261 "get_zone_info": false, 00:23:53.261 "zone_management": false, 00:23:53.261 "zone_append": false, 00:23:53.261 "compare": false, 00:23:53.261 "compare_and_write": false, 00:23:53.261 "abort": false, 00:23:53.261 "seek_hole": false, 00:23:53.261 "seek_data": false, 00:23:53.261 "copy": false, 00:23:53.261 "nvme_iov_md": false 00:23:53.261 }, 00:23:53.261 "memory_domains": [ 00:23:53.261 { 00:23:53.261 "dma_device_id": "system", 00:23:53.261 "dma_device_type": 1 00:23:53.261 }, 00:23:53.261 { 00:23:53.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.261 "dma_device_type": 2 00:23:53.261 }, 00:23:53.261 { 
00:23:53.261 "dma_device_id": "system", 00:23:53.261 "dma_device_type": 1 00:23:53.261 }, 00:23:53.261 { 00:23:53.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.261 "dma_device_type": 2 00:23:53.261 } 00:23:53.261 ], 00:23:53.261 "driver_specific": { 00:23:53.261 "raid": { 00:23:53.261 "uuid": "86543e11-8fb1-4823-a39b-45c4f86dc0e7", 00:23:53.261 "strip_size_kb": 0, 00:23:53.261 "state": "online", 00:23:53.261 "raid_level": "raid1", 00:23:53.261 "superblock": true, 00:23:53.261 "num_base_bdevs": 2, 00:23:53.261 "num_base_bdevs_discovered": 2, 00:23:53.261 "num_base_bdevs_operational": 2, 00:23:53.261 "base_bdevs_list": [ 00:23:53.261 { 00:23:53.261 "name": "BaseBdev1", 00:23:53.261 "uuid": "8a2e7c60-1e8c-4b69-ac61-1d9116b8aec2", 00:23:53.261 "is_configured": true, 00:23:53.261 "data_offset": 256, 00:23:53.261 "data_size": 7936 00:23:53.261 }, 00:23:53.261 { 00:23:53.261 "name": "BaseBdev2", 00:23:53.261 "uuid": "0293a5e6-0265-42dc-8316-3278ff698e0d", 00:23:53.261 "is_configured": true, 00:23:53.261 "data_offset": 256, 00:23:53.261 "data_size": 7936 00:23:53.261 } 00:23:53.261 ] 00:23:53.261 } 00:23:53.261 } 00:23:53.261 }' 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:23:53.261 BaseBdev2' 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:53.261 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:53.519 [2024-11-06 09:16:29.903660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:53.519 09:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.519 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:53.519 "name": "Existed_Raid", 00:23:53.519 "uuid": "86543e11-8fb1-4823-a39b-45c4f86dc0e7", 00:23:53.519 "strip_size_kb": 0, 00:23:53.519 "state": "online", 00:23:53.519 "raid_level": "raid1", 00:23:53.519 "superblock": true, 00:23:53.519 "num_base_bdevs": 2, 00:23:53.519 "num_base_bdevs_discovered": 1, 00:23:53.519 "num_base_bdevs_operational": 1, 00:23:53.519 "base_bdevs_list": [ 00:23:53.519 { 00:23:53.520 "name": null, 00:23:53.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.520 "is_configured": false, 00:23:53.520 "data_offset": 0, 00:23:53.520 "data_size": 7936 00:23:53.520 }, 00:23:53.520 { 00:23:53.520 "name": "BaseBdev2", 00:23:53.520 "uuid": 
"0293a5e6-0265-42dc-8316-3278ff698e0d", 00:23:53.520 "is_configured": true, 00:23:53.520 "data_offset": 256, 00:23:53.520 "data_size": 7936 00:23:53.520 } 00:23:53.520 ] 00:23:53.520 }' 00:23:53.520 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:53.520 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.086 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:54.086 [2024-11-06 09:16:30.553915] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:54.086 [2024-11-06 09:16:30.554044] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:54.344 [2024-11-06 09:16:30.648793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:54.344 [2024-11-06 09:16:30.648873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:54.344 [2024-11-06 09:16:30.648895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.344 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:23:54.345 09:16:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87535 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87535 ']' 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 87535 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87535 00:23:54.345 killing process with pid 87535 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87535' 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 87535 00:23:54.345 [2024-11-06 09:16:30.743456] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:54.345 09:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 87535 00:23:54.345 [2024-11-06 09:16:30.758275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:55.277 09:16:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:23:55.277 00:23:55.277 real 0m5.582s 00:23:55.277 user 0m8.471s 00:23:55.277 sys 0m0.768s 00:23:55.277 ************************************ 00:23:55.277 END TEST raid_state_function_test_sb_md_separate 00:23:55.277 
************************************ 00:23:55.277 09:16:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:55.278 09:16:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 09:16:31 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:23:55.535 09:16:31 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:23:55.535 09:16:31 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:55.535 09:16:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:55.535 ************************************ 00:23:55.535 START TEST raid_superblock_test_md_separate 00:23:55.535 ************************************ 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87788 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87788 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 87788 ']' 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:55.535 09:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:55.536 09:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:55.536 09:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:55.536 [2024-11-06 09:16:31.976409] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:23:55.536 [2024-11-06 09:16:31.976868] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87788 ] 00:23:55.794 [2024-11-06 09:16:32.166200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.794 [2024-11-06 09:16:32.325787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.051 [2024-11-06 09:16:32.533311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.051 [2024-11-06 09:16:32.533536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:56.618 09:16:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.618 09:16:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:56.618 malloc1 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:56.618 [2024-11-06 09:16:33.030892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:56.618 [2024-11-06 09:16:33.031120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.618 [2024-11-06 09:16:33.031203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:56.618 [2024-11-06 09:16:33.031335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.618 [2024-11-06 09:16:33.034008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.618 [2024-11-06 09:16:33.034053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:23:56.618 pt1 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:56.618 malloc2 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.618 09:16:33 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:56.618 [2024-11-06 09:16:33.088336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:56.618 [2024-11-06 09:16:33.088543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:56.618 [2024-11-06 09:16:33.088587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:56.618 [2024-11-06 09:16:33.088604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:56.618 [2024-11-06 09:16:33.091187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:56.618 [2024-11-06 09:16:33.091231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:56.618 pt2 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:56.618 [2024-11-06 09:16:33.100376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:56.618 [2024-11-06 09:16:33.102845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:56.618 [2024-11-06 09:16:33.103093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:56.618 [2024-11-06 09:16:33.103121] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:56.618 [2024-11-06 09:16:33.103228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:56.618 [2024-11-06 09:16:33.103396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:56.618 [2024-11-06 09:16:33.103417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:56.618 [2024-11-06 09:16:33.103556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.618 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:56.619 09:16:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.619 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.877 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:56.877 "name": "raid_bdev1", 00:23:56.877 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:56.877 "strip_size_kb": 0, 00:23:56.877 "state": "online", 00:23:56.877 "raid_level": "raid1", 00:23:56.877 "superblock": true, 00:23:56.877 "num_base_bdevs": 2, 00:23:56.877 "num_base_bdevs_discovered": 2, 00:23:56.877 "num_base_bdevs_operational": 2, 00:23:56.877 "base_bdevs_list": [ 00:23:56.877 { 00:23:56.877 "name": "pt1", 00:23:56.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:56.877 "is_configured": true, 00:23:56.877 "data_offset": 256, 00:23:56.877 "data_size": 7936 00:23:56.877 }, 00:23:56.877 { 00:23:56.877 "name": "pt2", 00:23:56.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:56.877 "is_configured": true, 00:23:56.877 "data_offset": 256, 00:23:56.877 "data_size": 7936 00:23:56.877 } 00:23:56.877 ] 00:23:56.877 }' 00:23:56.877 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:56.877 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:57.135 [2024-11-06 09:16:33.604876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:57.135 "name": "raid_bdev1", 00:23:57.135 "aliases": [ 00:23:57.135 "84838b71-8dcb-43ac-a083-ec68a259199c" 00:23:57.135 ], 00:23:57.135 "product_name": "Raid Volume", 00:23:57.135 "block_size": 4096, 00:23:57.135 "num_blocks": 7936, 00:23:57.135 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:57.135 "md_size": 32, 00:23:57.135 "md_interleave": false, 00:23:57.135 "dif_type": 0, 00:23:57.135 "assigned_rate_limits": { 00:23:57.135 "rw_ios_per_sec": 0, 00:23:57.135 "rw_mbytes_per_sec": 0, 00:23:57.135 "r_mbytes_per_sec": 0, 00:23:57.135 "w_mbytes_per_sec": 0 00:23:57.135 }, 00:23:57.135 "claimed": false, 00:23:57.135 "zoned": false, 
00:23:57.135 "supported_io_types": { 00:23:57.135 "read": true, 00:23:57.135 "write": true, 00:23:57.135 "unmap": false, 00:23:57.135 "flush": false, 00:23:57.135 "reset": true, 00:23:57.135 "nvme_admin": false, 00:23:57.135 "nvme_io": false, 00:23:57.135 "nvme_io_md": false, 00:23:57.135 "write_zeroes": true, 00:23:57.135 "zcopy": false, 00:23:57.135 "get_zone_info": false, 00:23:57.135 "zone_management": false, 00:23:57.135 "zone_append": false, 00:23:57.135 "compare": false, 00:23:57.135 "compare_and_write": false, 00:23:57.135 "abort": false, 00:23:57.135 "seek_hole": false, 00:23:57.135 "seek_data": false, 00:23:57.135 "copy": false, 00:23:57.135 "nvme_iov_md": false 00:23:57.135 }, 00:23:57.135 "memory_domains": [ 00:23:57.135 { 00:23:57.135 "dma_device_id": "system", 00:23:57.135 "dma_device_type": 1 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.135 "dma_device_type": 2 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "dma_device_id": "system", 00:23:57.135 "dma_device_type": 1 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.135 "dma_device_type": 2 00:23:57.135 } 00:23:57.135 ], 00:23:57.135 "driver_specific": { 00:23:57.135 "raid": { 00:23:57.135 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:57.135 "strip_size_kb": 0, 00:23:57.135 "state": "online", 00:23:57.135 "raid_level": "raid1", 00:23:57.135 "superblock": true, 00:23:57.135 "num_base_bdevs": 2, 00:23:57.135 "num_base_bdevs_discovered": 2, 00:23:57.135 "num_base_bdevs_operational": 2, 00:23:57.135 "base_bdevs_list": [ 00:23:57.135 { 00:23:57.135 "name": "pt1", 00:23:57.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 256, 00:23:57.135 "data_size": 7936 00:23:57.135 }, 00:23:57.135 { 00:23:57.135 "name": "pt2", 00:23:57.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.135 "is_configured": true, 00:23:57.135 "data_offset": 256, 
00:23:57.135 "data_size": 7936 00:23:57.135 } 00:23:57.135 ] 00:23:57.135 } 00:23:57.135 } 00:23:57.135 }' 00:23:57.135 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:57.393 pt2' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.393 [2024-11-06 09:16:33.880918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:57.393 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=84838b71-8dcb-43ac-a083-ec68a259199c 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 84838b71-8dcb-43ac-a083-ec68a259199c ']' 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:57.652 09:16:33 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.652 [2024-11-06 09:16:33.948550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:57.652 [2024-11-06 09:16:33.948732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:57.652 [2024-11-06 09:16:33.949008] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:57.652 [2024-11-06 09:16:33.949204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:57.652 [2024-11-06 09:16:33.949369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.652 09:16:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:23:57.652 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.653 [2024-11-06 09:16:34.076644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:57.653 [2024-11-06 09:16:34.079290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:57.653 [2024-11-06 09:16:34.079412] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:57.653 [2024-11-06 09:16:34.079496] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:57.653 [2024-11-06 09:16:34.079524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:57.653 [2024-11-06 09:16:34.079542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:23:57.653 request: 00:23:57.653 { 00:23:57.653 "name": "raid_bdev1", 00:23:57.653 "raid_level": "raid1", 00:23:57.653 "base_bdevs": [ 00:23:57.653 "malloc1", 00:23:57.653 "malloc2" 00:23:57.653 ], 00:23:57.653 "superblock": false, 00:23:57.653 "method": "bdev_raid_create", 00:23:57.653 "req_id": 1 00:23:57.653 } 00:23:57.653 Got JSON-RPC error response 00:23:57.653 response: 00:23:57.653 { 00:23:57.653 "code": -17, 00:23:57.653 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:57.653 } 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.653 [2024-11-06 09:16:34.136622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:57.653 [2024-11-06 09:16:34.136933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:57.653 [2024-11-06 09:16:34.137130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:57.653 [2024-11-06 09:16:34.137301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:57.653 [2024-11-06 09:16:34.140016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:57.653 [2024-11-06 09:16:34.140191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:57.653 [2024-11-06 09:16:34.140376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:57.653 [2024-11-06 09:16:34.140572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:57.653 pt1 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.653 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.911 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:57.911 "name": "raid_bdev1", 00:23:57.911 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:57.911 "strip_size_kb": 0, 00:23:57.912 "state": "configuring", 00:23:57.912 "raid_level": "raid1", 00:23:57.912 "superblock": true, 00:23:57.912 "num_base_bdevs": 2, 00:23:57.912 "num_base_bdevs_discovered": 1, 00:23:57.912 "num_base_bdevs_operational": 2, 00:23:57.912 "base_bdevs_list": [ 00:23:57.912 { 00:23:57.912 "name": "pt1", 00:23:57.912 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:57.912 "is_configured": true, 00:23:57.912 "data_offset": 256, 00:23:57.912 "data_size": 7936 00:23:57.912 }, 00:23:57.912 { 
00:23:57.912 "name": null, 00:23:57.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:57.912 "is_configured": false, 00:23:57.912 "data_offset": 256, 00:23:57.912 "data_size": 7936 00:23:57.912 } 00:23:57.912 ] 00:23:57.912 }' 00:23:57.912 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:57.912 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:58.170 [2024-11-06 09:16:34.641039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:58.170 [2024-11-06 09:16:34.641140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:58.170 [2024-11-06 09:16:34.641189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:58.170 [2024-11-06 09:16:34.641206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:58.170 [2024-11-06 09:16:34.641485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:58.170 [2024-11-06 09:16:34.641517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:58.170 [2024-11-06 09:16:34.641583] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:58.170 [2024-11-06 09:16:34.641618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:58.170 [2024-11-06 09:16:34.641764] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:58.170 [2024-11-06 09:16:34.641786] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:58.170 [2024-11-06 09:16:34.641893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:58.170 [2024-11-06 09:16:34.642058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:58.170 [2024-11-06 09:16:34.642074] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:58.170 [2024-11-06 09:16:34.642197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.170 pt2 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:58.170 09:16:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:58.170 "name": "raid_bdev1", 00:23:58.170 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:58.170 "strip_size_kb": 0, 00:23:58.170 "state": "online", 00:23:58.170 "raid_level": "raid1", 00:23:58.170 "superblock": true, 00:23:58.170 "num_base_bdevs": 2, 00:23:58.170 "num_base_bdevs_discovered": 2, 00:23:58.170 "num_base_bdevs_operational": 2, 00:23:58.170 "base_bdevs_list": [ 00:23:58.170 { 00:23:58.170 "name": "pt1", 00:23:58.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.170 "is_configured": true, 00:23:58.170 "data_offset": 256, 00:23:58.170 "data_size": 7936 00:23:58.170 }, 00:23:58.170 { 00:23:58.170 "name": "pt2", 00:23:58.170 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:23:58.170 "is_configured": true, 00:23:58.170 "data_offset": 256, 00:23:58.170 "data_size": 7936 00:23:58.170 } 00:23:58.170 ] 00:23:58.170 }' 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:58.170 09:16:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 [2024-11-06 09:16:35.169520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.737 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:23:58.737 "name": "raid_bdev1", 00:23:58.737 
"aliases": [ 00:23:58.737 "84838b71-8dcb-43ac-a083-ec68a259199c" 00:23:58.737 ], 00:23:58.737 "product_name": "Raid Volume", 00:23:58.737 "block_size": 4096, 00:23:58.737 "num_blocks": 7936, 00:23:58.737 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:58.737 "md_size": 32, 00:23:58.737 "md_interleave": false, 00:23:58.737 "dif_type": 0, 00:23:58.737 "assigned_rate_limits": { 00:23:58.737 "rw_ios_per_sec": 0, 00:23:58.737 "rw_mbytes_per_sec": 0, 00:23:58.737 "r_mbytes_per_sec": 0, 00:23:58.737 "w_mbytes_per_sec": 0 00:23:58.737 }, 00:23:58.737 "claimed": false, 00:23:58.737 "zoned": false, 00:23:58.737 "supported_io_types": { 00:23:58.737 "read": true, 00:23:58.738 "write": true, 00:23:58.738 "unmap": false, 00:23:58.738 "flush": false, 00:23:58.738 "reset": true, 00:23:58.738 "nvme_admin": false, 00:23:58.738 "nvme_io": false, 00:23:58.738 "nvme_io_md": false, 00:23:58.738 "write_zeroes": true, 00:23:58.738 "zcopy": false, 00:23:58.738 "get_zone_info": false, 00:23:58.738 "zone_management": false, 00:23:58.738 "zone_append": false, 00:23:58.738 "compare": false, 00:23:58.738 "compare_and_write": false, 00:23:58.738 "abort": false, 00:23:58.738 "seek_hole": false, 00:23:58.738 "seek_data": false, 00:23:58.738 "copy": false, 00:23:58.738 "nvme_iov_md": false 00:23:58.738 }, 00:23:58.738 "memory_domains": [ 00:23:58.738 { 00:23:58.738 "dma_device_id": "system", 00:23:58.738 "dma_device_type": 1 00:23:58.738 }, 00:23:58.738 { 00:23:58.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.738 "dma_device_type": 2 00:23:58.738 }, 00:23:58.738 { 00:23:58.738 "dma_device_id": "system", 00:23:58.738 "dma_device_type": 1 00:23:58.738 }, 00:23:58.738 { 00:23:58.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.738 "dma_device_type": 2 00:23:58.738 } 00:23:58.738 ], 00:23:58.738 "driver_specific": { 00:23:58.738 "raid": { 00:23:58.738 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:58.738 "strip_size_kb": 0, 00:23:58.738 "state": "online", 00:23:58.738 
"raid_level": "raid1", 00:23:58.738 "superblock": true, 00:23:58.738 "num_base_bdevs": 2, 00:23:58.738 "num_base_bdevs_discovered": 2, 00:23:58.738 "num_base_bdevs_operational": 2, 00:23:58.738 "base_bdevs_list": [ 00:23:58.738 { 00:23:58.738 "name": "pt1", 00:23:58.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:58.738 "is_configured": true, 00:23:58.738 "data_offset": 256, 00:23:58.738 "data_size": 7936 00:23:58.738 }, 00:23:58.738 { 00:23:58.738 "name": "pt2", 00:23:58.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:58.738 "is_configured": true, 00:23:58.738 "data_offset": 256, 00:23:58.738 "data_size": 7936 00:23:58.738 } 00:23:58.738 ] 00:23:58.738 } 00:23:58.738 } 00:23:58.738 }' 00:23:58.738 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:58.738 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:23:58.738 pt2' 00:23:58.738 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:58.999 09:16:35 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:23:58.999 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:23:59.000 [2024-11-06 09:16:35.417588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 84838b71-8dcb-43ac-a083-ec68a259199c '!=' 84838b71-8dcb-43ac-a083-ec68a259199c ']' 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.000 [2024-11-06 09:16:35.465315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:59.000 
09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.000 "name": "raid_bdev1", 00:23:59.000 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:59.000 "strip_size_kb": 0, 00:23:59.000 "state": "online", 00:23:59.000 "raid_level": "raid1", 00:23:59.000 "superblock": true, 00:23:59.000 "num_base_bdevs": 2, 00:23:59.000 "num_base_bdevs_discovered": 1, 00:23:59.000 "num_base_bdevs_operational": 1, 00:23:59.000 "base_bdevs_list": [ 00:23:59.000 { 00:23:59.000 "name": null, 00:23:59.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.000 "is_configured": false, 00:23:59.000 "data_offset": 0, 00:23:59.000 "data_size": 7936 00:23:59.000 }, 00:23:59.000 { 00:23:59.000 "name": "pt2", 00:23:59.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.000 "is_configured": true, 00:23:59.000 "data_offset": 256, 00:23:59.000 "data_size": 7936 00:23:59.000 } 
00:23:59.000 ] 00:23:59.000 }' 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.000 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.566 09:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:59.566 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.566 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.566 [2024-11-06 09:16:35.993420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.566 [2024-11-06 09:16:35.993597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.566 [2024-11-06 09:16:35.993829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.566 [2024-11-06 09:16:35.994015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.566 [2024-11-06 09:16:35.994152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:59.566 09:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.566 09:16:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.566 [2024-11-06 09:16:36.069459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:59.566 [2024-11-06 
09:16:36.069551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.566 [2024-11-06 09:16:36.069582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:59.566 [2024-11-06 09:16:36.069600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.566 [2024-11-06 09:16:36.072280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.566 [2024-11-06 09:16:36.072333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:59.566 [2024-11-06 09:16:36.072408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:59.566 [2024-11-06 09:16:36.072474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:59.566 [2024-11-06 09:16:36.072598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:23:59.566 [2024-11-06 09:16:36.072620] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:23:59.566 [2024-11-06 09:16:36.072717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:59.566 [2024-11-06 09:16:36.072892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:23:59.566 [2024-11-06 09:16:36.072908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:23:59.566 [2024-11-06 09:16:36.073036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:59.566 pt2 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:59.566 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:23:59.567 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.824 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:59.824 "name": "raid_bdev1", 00:23:59.824 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:23:59.824 "strip_size_kb": 0, 00:23:59.824 "state": "online", 00:23:59.824 "raid_level": "raid1", 00:23:59.824 "superblock": true, 00:23:59.824 "num_base_bdevs": 2, 00:23:59.824 
"num_base_bdevs_discovered": 1, 00:23:59.824 "num_base_bdevs_operational": 1, 00:23:59.824 "base_bdevs_list": [ 00:23:59.824 { 00:23:59.824 "name": null, 00:23:59.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.824 "is_configured": false, 00:23:59.824 "data_offset": 256, 00:23:59.824 "data_size": 7936 00:23:59.824 }, 00:23:59.824 { 00:23:59.824 "name": "pt2", 00:23:59.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:59.824 "is_configured": true, 00:23:59.824 "data_offset": 256, 00:23:59.824 "data_size": 7936 00:23:59.824 } 00:23:59.824 ] 00:23:59.824 }' 00:23:59.824 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:59.824 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.391 [2024-11-06 09:16:36.629566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.391 [2024-11-06 09:16:36.629604] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:00.391 [2024-11-06 09:16:36.629700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.391 [2024-11-06 09:16:36.629768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.391 [2024-11-06 09:16:36.629783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.391 09:16:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.391 [2024-11-06 09:16:36.693609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:00.391 [2024-11-06 09:16:36.693677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.391 [2024-11-06 09:16:36.693707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:00.391 [2024-11-06 09:16:36.693722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.391 [2024-11-06 09:16:36.696364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.391 [2024-11-06 09:16:36.696410] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:24:00.391 [2024-11-06 09:16:36.696483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:00.391 [2024-11-06 09:16:36.696540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:00.391 [2024-11-06 09:16:36.696717] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:00.391 [2024-11-06 09:16:36.696735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:00.391 [2024-11-06 09:16:36.696758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:00.391 [2024-11-06 09:16:36.696852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:00.391 [2024-11-06 09:16:36.696951] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:00.391 [2024-11-06 09:16:36.696967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:00.391 [2024-11-06 09:16:36.697056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:00.391 [2024-11-06 09:16:36.697193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:00.391 [2024-11-06 09:16:36.697220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:00.391 [2024-11-06 09:16:36.697353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:00.391 pt1 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:00.391 "name": "raid_bdev1", 00:24:00.391 "uuid": "84838b71-8dcb-43ac-a083-ec68a259199c", 00:24:00.391 "strip_size_kb": 0, 00:24:00.391 "state": "online", 00:24:00.391 "raid_level": "raid1", 
00:24:00.391 "superblock": true, 00:24:00.391 "num_base_bdevs": 2, 00:24:00.391 "num_base_bdevs_discovered": 1, 00:24:00.391 "num_base_bdevs_operational": 1, 00:24:00.391 "base_bdevs_list": [ 00:24:00.391 { 00:24:00.391 "name": null, 00:24:00.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.391 "is_configured": false, 00:24:00.391 "data_offset": 256, 00:24:00.391 "data_size": 7936 00:24:00.391 }, 00:24:00.391 { 00:24:00.391 "name": "pt2", 00:24:00.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:00.391 "is_configured": true, 00:24:00.391 "data_offset": 256, 00:24:00.391 "data_size": 7936 00:24:00.391 } 00:24:00.391 ] 00:24:00.391 }' 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:00.391 09:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.965 
09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:00.965 [2024-11-06 09:16:37.266067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 84838b71-8dcb-43ac-a083-ec68a259199c '!=' 84838b71-8dcb-43ac-a083-ec68a259199c ']' 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87788 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' -z 87788 ']' 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 87788 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87788 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:00.965 killing process with pid 87788 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87788' 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 87788 00:24:00.965 [2024-11-06 09:16:37.338201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:00.965 09:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # 
wait 87788 00:24:00.965 [2024-11-06 09:16:37.338307] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:00.965 [2024-11-06 09:16:37.338377] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:00.965 [2024-11-06 09:16:37.338402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:01.222 [2024-11-06 09:16:37.537064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:02.156 09:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:24:02.156 00:24:02.156 real 0m6.717s 00:24:02.156 user 0m10.592s 00:24:02.156 sys 0m1.020s 00:24:02.156 09:16:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:02.156 ************************************ 00:24:02.156 END TEST raid_superblock_test_md_separate 00:24:02.156 ************************************ 00:24:02.156 09:16:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:02.156 09:16:38 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:24:02.156 09:16:38 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:24:02.156 09:16:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:02.156 09:16:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:02.156 09:16:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:02.156 ************************************ 00:24:02.156 START TEST raid_rebuild_test_sb_md_separate 00:24:02.156 ************************************ 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local 
raid_level=raid1 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:02.156 
09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88117 00:24:02.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88117 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 88117 ']' 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:02.156 09:16:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:02.414 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:02.414 Zero copy mechanism will not be used. 00:24:02.414 [2024-11-06 09:16:38.724118] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:24:02.414 [2024-11-06 09:16:38.724287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88117 ] 00:24:02.414 [2024-11-06 09:16:38.902348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.672 [2024-11-06 09:16:39.035247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.930 [2024-11-06 09:16:39.245107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:02.930 [2024-11-06 09:16:39.245156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:03.188 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:03.188 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:24:03.188 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:03.188 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:24:03.188 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.188 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 BaseBdev1_malloc 
00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 [2024-11-06 09:16:39.753643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:03.446 [2024-11-06 09:16:39.753729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.446 [2024-11-06 09:16:39.753761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:03.446 [2024-11-06 09:16:39.753780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.446 [2024-11-06 09:16:39.756338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.446 [2024-11-06 09:16:39.756390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:03.446 BaseBdev1 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 BaseBdev2_malloc 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 [2024-11-06 09:16:39.821106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:03.446 [2024-11-06 09:16:39.821223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.446 [2024-11-06 09:16:39.821266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:03.446 [2024-11-06 09:16:39.821312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.446 [2024-11-06 09:16:39.824445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.446 [2024-11-06 09:16:39.824499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:03.446 BaseBdev2 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 spare_malloc 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 spare_delay 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 [2024-11-06 09:16:39.899564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:03.446 [2024-11-06 09:16:39.899650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:03.446 [2024-11-06 09:16:39.899696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:03.446 [2024-11-06 09:16:39.899718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:03.446 [2024-11-06 09:16:39.902278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:03.446 [2024-11-06 09:16:39.902329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:03.446 spare 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:24:03.446 [2024-11-06 09:16:39.907599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:03.446 [2024-11-06 09:16:39.910039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:03.446 [2024-11-06 09:16:39.910272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:03.446 [2024-11-06 09:16:39.910297] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:03.446 [2024-11-06 09:16:39.910393] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:03.446 [2024-11-06 09:16:39.910572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:03.446 [2024-11-06 09:16:39.910604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:03.446 [2024-11-06 09:16:39.910743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:03.446 09:16:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:03.446 "name": "raid_bdev1", 00:24:03.446 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:03.446 "strip_size_kb": 0, 00:24:03.446 "state": "online", 00:24:03.446 "raid_level": "raid1", 00:24:03.446 "superblock": true, 00:24:03.446 "num_base_bdevs": 2, 00:24:03.446 "num_base_bdevs_discovered": 2, 00:24:03.446 "num_base_bdevs_operational": 2, 00:24:03.446 "base_bdevs_list": [ 00:24:03.446 { 00:24:03.446 "name": "BaseBdev1", 00:24:03.446 "uuid": "e9198804-43ba-561b-88b4-0464eae7bce5", 00:24:03.446 "is_configured": true, 00:24:03.446 "data_offset": 256, 00:24:03.446 "data_size": 7936 00:24:03.446 }, 00:24:03.446 { 00:24:03.446 "name": "BaseBdev2", 00:24:03.446 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:03.446 "is_configured": true, 00:24:03.446 "data_offset": 256, 00:24:03.446 "data_size": 7936 
00:24:03.446 } 00:24:03.446 ] 00:24:03.446 }' 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:03.446 09:16:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:04.012 [2024-11-06 09:16:40.444121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:04.012 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:04.579 [2024-11-06 09:16:40.819943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:04.579 /dev/nbd0 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@871 -- # local i 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:24:04.579 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:04.580 1+0 records in 00:24:04.580 1+0 records out 00:24:04.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330466 s, 12.4 MB/s 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:04.580 09:16:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:24:04.580 09:16:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:24:05.515 7936+0 records in 00:24:05.515 7936+0 records out 00:24:05.515 32505856 bytes (33 MB, 31 MiB) copied, 0.935513 s, 34.7 MB/s 00:24:05.515 09:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:24:05.515 09:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:05.515 09:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:05.515 09:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:05.515 09:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:05.515 09:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:05.515 09:16:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:05.773 [2024-11-06 09:16:42.149403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.773 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:05.773 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:05.773 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:05.774 09:16:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:05.774 [2024-11-06 09:16:42.173545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:05.774 "name": "raid_bdev1", 00:24:05.774 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:05.774 "strip_size_kb": 0, 00:24:05.774 "state": "online", 00:24:05.774 "raid_level": "raid1", 00:24:05.774 "superblock": true, 00:24:05.774 "num_base_bdevs": 2, 00:24:05.774 "num_base_bdevs_discovered": 1, 00:24:05.774 "num_base_bdevs_operational": 1, 00:24:05.774 "base_bdevs_list": [ 00:24:05.774 { 00:24:05.774 "name": null, 00:24:05.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:05.774 "is_configured": false, 00:24:05.774 "data_offset": 0, 00:24:05.774 "data_size": 7936 00:24:05.774 }, 00:24:05.774 { 00:24:05.774 "name": "BaseBdev2", 00:24:05.774 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:05.774 "is_configured": true, 00:24:05.774 "data_offset": 256, 00:24:05.774 "data_size": 7936 00:24:05.774 } 00:24:05.774 ] 00:24:05.774 }' 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:05.774 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:24:06.349 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:06.349 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.349 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:06.349 [2024-11-06 09:16:42.705690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:06.349 [2024-11-06 09:16:42.719484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:24:06.349 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.349 09:16:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:06.349 [2024-11-06 09:16:42.722168] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:07.284 "name": "raid_bdev1", 00:24:07.284 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:07.284 "strip_size_kb": 0, 00:24:07.284 "state": "online", 00:24:07.284 "raid_level": "raid1", 00:24:07.284 "superblock": true, 00:24:07.284 "num_base_bdevs": 2, 00:24:07.284 "num_base_bdevs_discovered": 2, 00:24:07.284 "num_base_bdevs_operational": 2, 00:24:07.284 "process": { 00:24:07.284 "type": "rebuild", 00:24:07.284 "target": "spare", 00:24:07.284 "progress": { 00:24:07.284 "blocks": 2560, 00:24:07.284 "percent": 32 00:24:07.284 } 00:24:07.284 }, 00:24:07.284 "base_bdevs_list": [ 00:24:07.284 { 00:24:07.284 "name": "spare", 00:24:07.284 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:07.284 "is_configured": true, 00:24:07.284 "data_offset": 256, 00:24:07.284 "data_size": 7936 00:24:07.284 }, 00:24:07.284 { 00:24:07.284 "name": "BaseBdev2", 00:24:07.284 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:07.284 "is_configured": true, 00:24:07.284 "data_offset": 256, 00:24:07.284 "data_size": 7936 00:24:07.284 } 00:24:07.284 ] 00:24:07.284 }' 00:24:07.284 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:07.542 09:16:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:07.542 [2024-11-06 09:16:43.892104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.542 [2024-11-06 09:16:43.931508] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:07.542 [2024-11-06 09:16:43.931890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:07.542 [2024-11-06 09:16:43.931921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:07.542 [2024-11-06 09:16:43.931938] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:07.542 09:16:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:07.542 09:16:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.542 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:07.542 "name": "raid_bdev1", 00:24:07.542 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:07.542 "strip_size_kb": 0, 00:24:07.542 "state": "online", 00:24:07.542 "raid_level": "raid1", 00:24:07.542 "superblock": true, 00:24:07.542 "num_base_bdevs": 2, 00:24:07.542 "num_base_bdevs_discovered": 1, 00:24:07.542 "num_base_bdevs_operational": 1, 00:24:07.542 "base_bdevs_list": [ 00:24:07.542 { 00:24:07.542 "name": null, 00:24:07.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.542 "is_configured": false, 00:24:07.542 "data_offset": 0, 00:24:07.542 "data_size": 7936 00:24:07.542 }, 00:24:07.542 { 00:24:07.542 "name": "BaseBdev2", 00:24:07.542 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:07.542 "is_configured": true, 00:24:07.542 "data_offset": 256, 00:24:07.542 "data_size": 7936 00:24:07.542 } 00:24:07.542 ] 00:24:07.542 }' 00:24:07.542 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:07.542 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.108 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:08.108 "name": "raid_bdev1", 00:24:08.108 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:08.108 "strip_size_kb": 0, 00:24:08.108 "state": "online", 00:24:08.108 "raid_level": "raid1", 00:24:08.108 "superblock": true, 00:24:08.108 "num_base_bdevs": 2, 00:24:08.108 "num_base_bdevs_discovered": 1, 00:24:08.108 "num_base_bdevs_operational": 1, 00:24:08.108 "base_bdevs_list": [ 00:24:08.108 { 00:24:08.108 "name": null, 00:24:08.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:08.108 
"is_configured": false, 00:24:08.108 "data_offset": 0, 00:24:08.108 "data_size": 7936 00:24:08.108 }, 00:24:08.108 { 00:24:08.108 "name": "BaseBdev2", 00:24:08.108 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:08.108 "is_configured": true, 00:24:08.108 "data_offset": 256, 00:24:08.108 "data_size": 7936 00:24:08.108 } 00:24:08.109 ] 00:24:08.109 }' 00:24:08.109 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:08.109 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:08.109 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:08.109 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:08.109 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:08.109 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.109 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:08.366 [2024-11-06 09:16:44.646231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:08.366 [2024-11-06 09:16:44.659244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:24:08.366 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.366 09:16:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:08.366 [2024-11-06 09:16:44.661796] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:09.335 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.335 09:16:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.335 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.335 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.336 "name": "raid_bdev1", 00:24:09.336 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:09.336 "strip_size_kb": 0, 00:24:09.336 "state": "online", 00:24:09.336 "raid_level": "raid1", 00:24:09.336 "superblock": true, 00:24:09.336 "num_base_bdevs": 2, 00:24:09.336 "num_base_bdevs_discovered": 2, 00:24:09.336 "num_base_bdevs_operational": 2, 00:24:09.336 "process": { 00:24:09.336 "type": "rebuild", 00:24:09.336 "target": "spare", 00:24:09.336 "progress": { 00:24:09.336 "blocks": 2560, 00:24:09.336 "percent": 32 00:24:09.336 } 00:24:09.336 }, 00:24:09.336 "base_bdevs_list": [ 00:24:09.336 { 00:24:09.336 "name": "spare", 00:24:09.336 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:09.336 "is_configured": true, 00:24:09.336 "data_offset": 256, 00:24:09.336 "data_size": 7936 00:24:09.336 }, 
00:24:09.336 { 00:24:09.336 "name": "BaseBdev2", 00:24:09.336 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:09.336 "is_configured": true, 00:24:09.336 "data_offset": 256, 00:24:09.336 "data_size": 7936 00:24:09.336 } 00:24:09.336 ] 00:24:09.336 }' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:09.336 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=765 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:09.336 09:16:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:09.336 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.594 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:09.594 "name": "raid_bdev1", 00:24:09.594 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:09.594 "strip_size_kb": 0, 00:24:09.594 "state": "online", 00:24:09.594 "raid_level": "raid1", 00:24:09.594 "superblock": true, 00:24:09.594 "num_base_bdevs": 2, 00:24:09.594 "num_base_bdevs_discovered": 2, 00:24:09.594 "num_base_bdevs_operational": 2, 00:24:09.594 "process": { 00:24:09.594 "type": "rebuild", 00:24:09.594 "target": "spare", 00:24:09.594 "progress": { 00:24:09.594 "blocks": 2816, 00:24:09.594 "percent": 35 00:24:09.594 } 00:24:09.594 }, 00:24:09.594 "base_bdevs_list": [ 00:24:09.594 { 00:24:09.594 "name": "spare", 00:24:09.594 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:09.594 "is_configured": true, 00:24:09.594 "data_offset": 256, 00:24:09.594 "data_size": 7936 00:24:09.594 }, 00:24:09.594 { 00:24:09.594 "name": "BaseBdev2", 00:24:09.594 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:09.594 
"is_configured": true, 00:24:09.594 "data_offset": 256, 00:24:09.594 "data_size": 7936 00:24:09.594 } 00:24:09.594 ] 00:24:09.594 }' 00:24:09.594 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:09.594 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:09.594 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:09.594 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.594 09:16:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:10.527 09:16:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.527 09:16:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.527 09:16:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:10.527 "name": "raid_bdev1", 00:24:10.527 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:10.527 "strip_size_kb": 0, 00:24:10.527 "state": "online", 00:24:10.527 "raid_level": "raid1", 00:24:10.527 "superblock": true, 00:24:10.527 "num_base_bdevs": 2, 00:24:10.527 "num_base_bdevs_discovered": 2, 00:24:10.527 "num_base_bdevs_operational": 2, 00:24:10.527 "process": { 00:24:10.527 "type": "rebuild", 00:24:10.527 "target": "spare", 00:24:10.527 "progress": { 00:24:10.527 "blocks": 5888, 00:24:10.527 "percent": 74 00:24:10.527 } 00:24:10.527 }, 00:24:10.527 "base_bdevs_list": [ 00:24:10.527 { 00:24:10.527 "name": "spare", 00:24:10.527 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:10.527 "is_configured": true, 00:24:10.527 "data_offset": 256, 00:24:10.527 "data_size": 7936 00:24:10.527 }, 00:24:10.527 { 00:24:10.527 "name": "BaseBdev2", 00:24:10.527 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:10.527 "is_configured": true, 00:24:10.527 "data_offset": 256, 00:24:10.527 "data_size": 7936 00:24:10.527 } 00:24:10.527 ] 00:24:10.527 }' 00:24:10.527 09:16:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:10.785 09:16:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.785 09:16:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:10.785 09:16:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.785 09:16:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:11.352 [2024-11-06 09:16:47.784392] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:24:11.352 [2024-11-06 09:16:47.784498] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:11.352 [2024-11-06 09:16:47.784649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.920 "name": "raid_bdev1", 00:24:11.920 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:11.920 "strip_size_kb": 0, 00:24:11.920 "state": "online", 00:24:11.920 "raid_level": "raid1", 00:24:11.920 "superblock": true, 00:24:11.920 
"num_base_bdevs": 2, 00:24:11.920 "num_base_bdevs_discovered": 2, 00:24:11.920 "num_base_bdevs_operational": 2, 00:24:11.920 "base_bdevs_list": [ 00:24:11.920 { 00:24:11.920 "name": "spare", 00:24:11.920 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:11.920 "is_configured": true, 00:24:11.920 "data_offset": 256, 00:24:11.920 "data_size": 7936 00:24:11.920 }, 00:24:11.920 { 00:24:11.920 "name": "BaseBdev2", 00:24:11.920 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:11.920 "is_configured": true, 00:24:11.920 "data_offset": 256, 00:24:11.920 "data_size": 7936 00:24:11.920 } 00:24:11.920 ] 00:24:11.920 }' 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.920 
09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:11.920 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.921 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:11.921 "name": "raid_bdev1", 00:24:11.921 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:11.921 "strip_size_kb": 0, 00:24:11.921 "state": "online", 00:24:11.921 "raid_level": "raid1", 00:24:11.921 "superblock": true, 00:24:11.921 "num_base_bdevs": 2, 00:24:11.921 "num_base_bdevs_discovered": 2, 00:24:11.921 "num_base_bdevs_operational": 2, 00:24:11.921 "base_bdevs_list": [ 00:24:11.921 { 00:24:11.921 "name": "spare", 00:24:11.921 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:11.921 "is_configured": true, 00:24:11.921 "data_offset": 256, 00:24:11.921 "data_size": 7936 00:24:11.921 }, 00:24:11.921 { 00:24:11.921 "name": "BaseBdev2", 00:24:11.921 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:11.921 "is_configured": true, 00:24:11.921 "data_offset": 256, 00:24:11.921 "data_size": 7936 00:24:11.921 } 00:24:11.921 ] 00:24:11.921 }' 00:24:11.921 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:11.921 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:11.921 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:12.179 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:12.180 "name": "raid_bdev1", 00:24:12.180 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:12.180 
"strip_size_kb": 0, 00:24:12.180 "state": "online", 00:24:12.180 "raid_level": "raid1", 00:24:12.180 "superblock": true, 00:24:12.180 "num_base_bdevs": 2, 00:24:12.180 "num_base_bdevs_discovered": 2, 00:24:12.180 "num_base_bdevs_operational": 2, 00:24:12.180 "base_bdevs_list": [ 00:24:12.180 { 00:24:12.180 "name": "spare", 00:24:12.180 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:12.180 "is_configured": true, 00:24:12.180 "data_offset": 256, 00:24:12.180 "data_size": 7936 00:24:12.180 }, 00:24:12.180 { 00:24:12.180 "name": "BaseBdev2", 00:24:12.180 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:12.180 "is_configured": true, 00:24:12.180 "data_offset": 256, 00:24:12.180 "data_size": 7936 00:24:12.180 } 00:24:12.180 ] 00:24:12.180 }' 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:12.180 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:12.747 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:12.747 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.747 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:12.747 [2024-11-06 09:16:48.995303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:12.747 [2024-11-06 09:16:48.995340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:12.747 [2024-11-06 09:16:48.995454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:12.747 [2024-11-06 09:16:48.995539] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:12.747 [2024-11-06 09:16:48.995555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:24:12.747 09:16:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:12.747 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:13.007 /dev/nbd0 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:13.007 1+0 records in 00:24:13.007 1+0 records out 00:24:13.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322872 s, 12.7 MB/s 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.007 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:24:13.265 /dev/nbd1 00:24:13.265 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:13.265 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@886 -- # (( i = 1 )) 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:13.266 1+0 records in 00:24:13.266 1+0 records out 00:24:13.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316678 s, 12.9 MB/s 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:13.266 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:13.525 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:24:13.525 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:24:13.525 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:13.525 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:24:13.525 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:24:13.525 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:13.525 09:16:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:13.785 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.044 [2024-11-06 09:16:50.533581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:14.044 [2024-11-06 09:16:50.533660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:14.044 [2024-11-06 09:16:50.533692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:14.044 [2024-11-06 09:16:50.533707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:24:14.044 [2024-11-06 09:16:50.536444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:14.044 [2024-11-06 09:16:50.536486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:14.044 [2024-11-06 09:16:50.536582] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:14.044 [2024-11-06 09:16:50.536650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:14.044 [2024-11-06 09:16:50.536823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.044 spare 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.044 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.303 [2024-11-06 09:16:50.636988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:14.303 [2024-11-06 09:16:50.637057] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:24:14.303 [2024-11-06 09:16:50.637217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:24:14.303 [2024-11-06 09:16:50.637482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:14.303 [2024-11-06 09:16:50.637498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:14.303 [2024-11-06 09:16:50.637690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.303 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.304 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.304 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.304 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.304 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.304 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:14.304 "name": "raid_bdev1", 00:24:14.304 "uuid": 
"a156958c-576c-416d-9565-457720440f58", 00:24:14.304 "strip_size_kb": 0, 00:24:14.304 "state": "online", 00:24:14.304 "raid_level": "raid1", 00:24:14.304 "superblock": true, 00:24:14.304 "num_base_bdevs": 2, 00:24:14.304 "num_base_bdevs_discovered": 2, 00:24:14.304 "num_base_bdevs_operational": 2, 00:24:14.304 "base_bdevs_list": [ 00:24:14.304 { 00:24:14.304 "name": "spare", 00:24:14.304 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:14.304 "is_configured": true, 00:24:14.304 "data_offset": 256, 00:24:14.304 "data_size": 7936 00:24:14.304 }, 00:24:14.304 { 00:24:14.304 "name": "BaseBdev2", 00:24:14.304 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:14.304 "is_configured": true, 00:24:14.304 "data_offset": 256, 00:24:14.304 "data_size": 7936 00:24:14.304 } 00:24:14.304 ] 00:24:14.304 }' 00:24:14.304 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:14.304 09:16:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:14.871 "name": "raid_bdev1", 00:24:14.871 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:14.871 "strip_size_kb": 0, 00:24:14.871 "state": "online", 00:24:14.871 "raid_level": "raid1", 00:24:14.871 "superblock": true, 00:24:14.871 "num_base_bdevs": 2, 00:24:14.871 "num_base_bdevs_discovered": 2, 00:24:14.871 "num_base_bdevs_operational": 2, 00:24:14.871 "base_bdevs_list": [ 00:24:14.871 { 00:24:14.871 "name": "spare", 00:24:14.871 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:14.871 "is_configured": true, 00:24:14.871 "data_offset": 256, 00:24:14.871 "data_size": 7936 00:24:14.871 }, 00:24:14.871 { 00:24:14.871 "name": "BaseBdev2", 00:24:14.871 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:14.871 "is_configured": true, 00:24:14.871 "data_offset": 256, 00:24:14.871 "data_size": 7936 00:24:14.871 } 00:24:14.871 ] 00:24:14.871 }' 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.871 [2024-11-06 09:16:51.394029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:14.871 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.131 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.131 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:15.131 "name": "raid_bdev1", 00:24:15.131 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:15.131 "strip_size_kb": 0, 00:24:15.131 "state": "online", 00:24:15.131 "raid_level": "raid1", 00:24:15.131 "superblock": true, 00:24:15.131 "num_base_bdevs": 2, 00:24:15.131 "num_base_bdevs_discovered": 1, 00:24:15.131 "num_base_bdevs_operational": 1, 00:24:15.131 "base_bdevs_list": [ 00:24:15.131 { 00:24:15.131 "name": null, 00:24:15.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.131 "is_configured": false, 00:24:15.131 "data_offset": 0, 00:24:15.131 "data_size": 7936 00:24:15.131 }, 00:24:15.131 { 00:24:15.131 "name": "BaseBdev2", 00:24:15.131 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:15.131 "is_configured": true, 00:24:15.131 "data_offset": 256, 00:24:15.131 "data_size": 7936 00:24:15.131 } 00:24:15.131 ] 00:24:15.131 }' 00:24:15.131 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:15.131 09:16:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:15.389 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:15.389 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.389 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:15.389 [2024-11-06 09:16:51.906198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.389 [2024-11-06 09:16:51.906437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:15.389 [2024-11-06 09:16:51.906464] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:15.389 [2024-11-06 09:16:51.906516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:15.389 [2024-11-06 09:16:51.919215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:24:15.389 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.389 09:16:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:15.389 [2024-11-06 09:16:51.921680] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:16.766 "name": "raid_bdev1", 00:24:16.766 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:16.766 "strip_size_kb": 0, 00:24:16.766 "state": "online", 00:24:16.766 "raid_level": "raid1", 00:24:16.766 "superblock": true, 00:24:16.766 "num_base_bdevs": 2, 00:24:16.766 "num_base_bdevs_discovered": 2, 00:24:16.766 "num_base_bdevs_operational": 2, 00:24:16.766 "process": { 00:24:16.766 "type": "rebuild", 00:24:16.766 "target": "spare", 00:24:16.766 "progress": { 00:24:16.766 "blocks": 2560, 00:24:16.766 "percent": 32 00:24:16.766 } 00:24:16.766 }, 00:24:16.766 "base_bdevs_list": [ 00:24:16.766 { 00:24:16.766 "name": "spare", 00:24:16.766 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:16.766 "is_configured": true, 00:24:16.766 "data_offset": 256, 00:24:16.766 "data_size": 7936 00:24:16.766 }, 00:24:16.766 { 00:24:16.766 "name": "BaseBdev2", 00:24:16.766 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:16.766 "is_configured": true, 00:24:16.766 "data_offset": 256, 00:24:16.766 "data_size": 7936 00:24:16.766 } 00:24:16.766 ] 00:24:16.766 }' 00:24:16.766 09:16:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:16.766 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.766 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:16.766 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.766 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:16.766 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.766 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:16.766 [2024-11-06 09:16:53.087695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.767 [2024-11-06 09:16:53.131022] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:16.767 [2024-11-06 09:16:53.131133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.767 [2024-11-06 09:16:53.131156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:16.767 [2024-11-06 09:16:53.131190] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:16.767 "name": "raid_bdev1", 00:24:16.767 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:16.767 "strip_size_kb": 0, 00:24:16.767 "state": "online", 00:24:16.767 "raid_level": "raid1", 00:24:16.767 "superblock": true, 00:24:16.767 "num_base_bdevs": 2, 00:24:16.767 "num_base_bdevs_discovered": 1, 00:24:16.767 "num_base_bdevs_operational": 1, 00:24:16.767 "base_bdevs_list": [ 00:24:16.767 { 00:24:16.767 "name": null, 00:24:16.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.767 
"is_configured": false, 00:24:16.767 "data_offset": 0, 00:24:16.767 "data_size": 7936 00:24:16.767 }, 00:24:16.767 { 00:24:16.767 "name": "BaseBdev2", 00:24:16.767 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:16.767 "is_configured": true, 00:24:16.767 "data_offset": 256, 00:24:16.767 "data_size": 7936 00:24:16.767 } 00:24:16.767 ] 00:24:16.767 }' 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:16.767 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:17.335 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:17.335 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.335 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:17.335 [2024-11-06 09:16:53.681914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:17.335 [2024-11-06 09:16:53.682145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:17.335 [2024-11-06 09:16:53.682221] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:17.335 [2024-11-06 09:16:53.682476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:17.335 [2024-11-06 09:16:53.682846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:17.335 [2024-11-06 09:16:53.682888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:17.335 [2024-11-06 09:16:53.682986] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:17.335 [2024-11-06 09:16:53.683011] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:24:17.335 [2024-11-06 09:16:53.683024] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:24:17.335 [2024-11-06 09:16:53.683054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.335 [2024-11-06 09:16:53.695781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:24:17.335 spare 00:24:17.335 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.335 09:16:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:24:17.335 [2024-11-06 09:16:53.698405] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:18.271 "name": "raid_bdev1", 00:24:18.271 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:18.271 "strip_size_kb": 0, 00:24:18.271 "state": "online", 00:24:18.271 "raid_level": "raid1", 00:24:18.271 "superblock": true, 00:24:18.271 "num_base_bdevs": 2, 00:24:18.271 "num_base_bdevs_discovered": 2, 00:24:18.271 "num_base_bdevs_operational": 2, 00:24:18.271 "process": { 00:24:18.271 "type": "rebuild", 00:24:18.271 "target": "spare", 00:24:18.271 "progress": { 00:24:18.271 "blocks": 2560, 00:24:18.271 "percent": 32 00:24:18.271 } 00:24:18.271 }, 00:24:18.271 "base_bdevs_list": [ 00:24:18.271 { 00:24:18.271 "name": "spare", 00:24:18.271 "uuid": "4f70fad5-47e4-5b37-8b1b-a3f30e6c1f69", 00:24:18.271 "is_configured": true, 00:24:18.271 "data_offset": 256, 00:24:18.271 "data_size": 7936 00:24:18.271 }, 00:24:18.271 { 00:24:18.271 "name": "BaseBdev2", 00:24:18.271 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:18.271 "is_configured": true, 00:24:18.271 "data_offset": 256, 00:24:18.271 "data_size": 7936 00:24:18.271 } 00:24:18.271 ] 00:24:18.271 }' 00:24:18.271 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:18.530 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.530 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:18.530 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.530 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:24:18.530 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.531 09:16:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 [2024-11-06 09:16:54.864226] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.531 [2024-11-06 09:16:54.907713] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:18.531 [2024-11-06 09:16:54.907946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.531 [2024-11-06 09:16:54.907982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.531 [2024-11-06 09:16:54.907994] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:18.531 09:16:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:18.531 "name": "raid_bdev1", 00:24:18.531 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:18.531 "strip_size_kb": 0, 00:24:18.531 "state": "online", 00:24:18.531 "raid_level": "raid1", 00:24:18.531 "superblock": true, 00:24:18.531 "num_base_bdevs": 2, 00:24:18.531 "num_base_bdevs_discovered": 1, 00:24:18.531 "num_base_bdevs_operational": 1, 00:24:18.531 "base_bdevs_list": [ 00:24:18.531 { 00:24:18.531 "name": null, 00:24:18.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.531 "is_configured": false, 00:24:18.531 "data_offset": 0, 00:24:18.531 "data_size": 7936 00:24:18.531 }, 00:24:18.531 { 00:24:18.531 "name": "BaseBdev2", 00:24:18.531 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:18.531 "is_configured": true, 00:24:18.531 "data_offset": 256, 00:24:18.531 "data_size": 7936 00:24:18.531 } 00:24:18.531 ] 00:24:18.531 }' 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:18.531 09:16:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:19.098 "name": "raid_bdev1", 00:24:19.098 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:19.098 "strip_size_kb": 0, 00:24:19.098 "state": "online", 00:24:19.098 "raid_level": "raid1", 00:24:19.098 "superblock": true, 00:24:19.098 "num_base_bdevs": 2, 00:24:19.098 "num_base_bdevs_discovered": 1, 00:24:19.098 "num_base_bdevs_operational": 1, 00:24:19.098 "base_bdevs_list": [ 00:24:19.098 { 00:24:19.098 "name": null, 00:24:19.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.098 "is_configured": false, 00:24:19.098 "data_offset": 0, 00:24:19.098 "data_size": 7936 00:24:19.098 }, 00:24:19.098 { 00:24:19.098 "name": "BaseBdev2", 00:24:19.098 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:19.098 "is_configured": true, 
00:24:19.098 "data_offset": 256, 00:24:19.098 "data_size": 7936 00:24:19.098 } 00:24:19.098 ] 00:24:19.098 }' 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:19.098 [2024-11-06 09:16:55.626938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:19.098 [2024-11-06 09:16:55.627131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.098 [2024-11-06 09:16:55.627179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:19.098 [2024-11-06 09:16:55.627196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.098 [2024-11-06 09:16:55.627463] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.098 [2024-11-06 09:16:55.627486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:19.098 [2024-11-06 09:16:55.627559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:19.098 [2024-11-06 09:16:55.627578] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:19.098 [2024-11-06 09:16:55.627592] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:19.098 [2024-11-06 09:16:55.627605] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:24:19.098 BaseBdev1 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.098 09:16:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:20.478 "name": "raid_bdev1", 00:24:20.478 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:20.478 "strip_size_kb": 0, 00:24:20.478 "state": "online", 00:24:20.478 "raid_level": "raid1", 00:24:20.478 "superblock": true, 00:24:20.478 "num_base_bdevs": 2, 00:24:20.478 "num_base_bdevs_discovered": 1, 00:24:20.478 "num_base_bdevs_operational": 1, 00:24:20.478 "base_bdevs_list": [ 00:24:20.478 { 00:24:20.478 "name": null, 00:24:20.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.478 "is_configured": false, 00:24:20.478 "data_offset": 0, 00:24:20.478 "data_size": 7936 00:24:20.478 }, 00:24:20.478 { 00:24:20.478 "name": "BaseBdev2", 00:24:20.478 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:20.478 "is_configured": true, 00:24:20.478 "data_offset": 256, 00:24:20.478 "data_size": 7936 00:24:20.478 } 00:24:20.478 ] 00:24:20.478 }' 00:24:20.478 09:16:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:20.478 09:16:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:20.737 "name": "raid_bdev1", 00:24:20.737 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:20.737 "strip_size_kb": 0, 00:24:20.737 "state": "online", 00:24:20.737 "raid_level": "raid1", 00:24:20.737 "superblock": true, 00:24:20.737 "num_base_bdevs": 2, 00:24:20.737 "num_base_bdevs_discovered": 1, 00:24:20.737 "num_base_bdevs_operational": 1, 00:24:20.737 "base_bdevs_list": [ 00:24:20.737 { 00:24:20.737 "name": null, 00:24:20.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.737 "is_configured": false, 00:24:20.737 "data_offset": 0, 00:24:20.737 
"data_size": 7936 00:24:20.737 }, 00:24:20.737 { 00:24:20.737 "name": "BaseBdev2", 00:24:20.737 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:20.737 "is_configured": true, 00:24:20.737 "data_offset": 256, 00:24:20.737 "data_size": 7936 00:24:20.737 } 00:24:20.737 ] 00:24:20.737 }' 00:24:20.737 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:24:20.999 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:21.000 [2024-11-06 09:16:57.351562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.000 [2024-11-06 09:16:57.351774] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:24:21.000 [2024-11-06 09:16:57.351798] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:24:21.000 request: 00:24:21.000 { 00:24:21.000 "base_bdev": "BaseBdev1", 00:24:21.000 "raid_bdev": "raid_bdev1", 00:24:21.000 "method": "bdev_raid_add_base_bdev", 00:24:21.000 "req_id": 1 00:24:21.000 } 00:24:21.000 Got JSON-RPC error response 00:24:21.000 response: 00:24:21.000 { 00:24:21.000 "code": -22, 00:24:21.000 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:24:21.000 } 00:24:21.000 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:21.000 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:24:21.000 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:21.000 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:21.000 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:21.000 09:16:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:21.936 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:21.937 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.937 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.937 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:21.937 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.937 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:21.937 "name": "raid_bdev1", 00:24:21.937 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:21.937 "strip_size_kb": 0, 00:24:21.937 "state": "online", 00:24:21.937 "raid_level": "raid1", 00:24:21.937 "superblock": true, 00:24:21.937 "num_base_bdevs": 2, 00:24:21.937 "num_base_bdevs_discovered": 1, 00:24:21.937 "num_base_bdevs_operational": 1, 00:24:21.937 "base_bdevs_list": [ 
00:24:21.937 { 00:24:21.937 "name": null, 00:24:21.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.937 "is_configured": false, 00:24:21.937 "data_offset": 0, 00:24:21.937 "data_size": 7936 00:24:21.937 }, 00:24:21.937 { 00:24:21.937 "name": "BaseBdev2", 00:24:21.937 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:21.937 "is_configured": true, 00:24:21.937 "data_offset": 256, 00:24:21.937 "data_size": 7936 00:24:21.937 } 00:24:21.937 ] 00:24:21.937 }' 00:24:21.937 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:21.937 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:22.505 "name": "raid_bdev1", 00:24:22.505 "uuid": "a156958c-576c-416d-9565-457720440f58", 00:24:22.505 "strip_size_kb": 0, 00:24:22.505 "state": "online", 00:24:22.505 "raid_level": "raid1", 00:24:22.505 "superblock": true, 00:24:22.505 "num_base_bdevs": 2, 00:24:22.505 "num_base_bdevs_discovered": 1, 00:24:22.505 "num_base_bdevs_operational": 1, 00:24:22.505 "base_bdevs_list": [ 00:24:22.505 { 00:24:22.505 "name": null, 00:24:22.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:22.505 "is_configured": false, 00:24:22.505 "data_offset": 0, 00:24:22.505 "data_size": 7936 00:24:22.505 }, 00:24:22.505 { 00:24:22.505 "name": "BaseBdev2", 00:24:22.505 "uuid": "8c09e652-67fb-596e-88f0-e943fcdae3bf", 00:24:22.505 "is_configured": true, 00:24:22.505 "data_offset": 256, 00:24:22.505 "data_size": 7936 00:24:22.505 } 00:24:22.505 ] 00:24:22.505 }' 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:22.505 09:16:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88117 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 88117 ']' 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 88117 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:22.764 
09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88117 00:24:22.764 killing process with pid 88117 00:24:22.764 Received shutdown signal, test time was about 60.000000 seconds 00:24:22.764 00:24:22.764 Latency(us) 00:24:22.764 [2024-11-06T09:16:59.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.764 [2024-11-06T09:16:59.299Z] =================================================================================================================== 00:24:22.764 [2024-11-06T09:16:59.299Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88117' 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 88117 00:24:22.764 [2024-11-06 09:16:59.077944] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:22.764 09:16:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 88117 00:24:22.764 [2024-11-06 09:16:59.078105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:22.764 [2024-11-06 09:16:59.078167] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:22.764 [2024-11-06 09:16:59.078185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:23.023 [2024-11-06 09:16:59.374308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:23.986 09:17:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:24:23.986 00:24:23.986 real 0m21.782s 00:24:23.986 user 0m29.556s 00:24:23.986 sys 0m2.558s 00:24:23.986 09:17:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:23.986 09:17:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:24:23.986 ************************************ 00:24:23.986 END TEST raid_rebuild_test_sb_md_separate 00:24:23.986 ************************************ 00:24:23.986 09:17:00 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:24:23.986 09:17:00 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:24:23.986 09:17:00 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:24:23.986 09:17:00 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:23.986 09:17:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:23.986 ************************************ 00:24:23.986 START TEST raid_state_function_test_sb_md_interleaved 00:24:23.986 ************************************ 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:24:23.986 09:17:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:24:23.986 Process raid pid: 88820 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88820 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88820' 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88820 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 88820 ']' 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:23.986 09:17:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:24.246 [2024-11-06 09:17:00.571854] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
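At this point the test has launched `bdev_svc` and calls `waitforlisten 88820`, which blocks until the target process is up and listening on the UNIX domain socket `/var/tmp/spdk.sock`. As a rough illustration only (the real `waitforlisten` in `autotest_common.sh` also verifies the PID is alive and retries the RPC client; the function and path names below are illustrative, not the actual implementation), the core wait loop amounts to polling for the socket path with a bounded number of retries:

```shell
# Sketch of a waitforlisten-style helper: poll until a UNIX socket path
# appears or a retry budget is exhausted. This is an approximation; the
# real helper in autotest_common.sh does more (PID liveness, RPC probe).
wait_for_socket() {
    local sock_path=$1
    local max_retries=${2:-100}   # default mirrors the log's max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [[ -S $sock_path ]]; then
            return 0              # socket exists; target is (likely) listening
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock_path" >&2
    return 1
}
```

In the log, the equivalent loop is what produces the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message before the SPDK initialization banner appears.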
00:24:24.246 [2024-11-06 09:17:00.572268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.246 [2024-11-06 09:17:00.752426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.505 [2024-11-06 09:17:00.882983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.764 [2024-11-06 09:17:01.099688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.764 [2024-11-06 09:17:01.099742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.023 [2024-11-06 09:17:01.544513] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.023 [2024-11-06 09:17:01.544581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.023 [2024-11-06 09:17:01.544600] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.023 [2024-11-06 09:17:01.544616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.023 09:17:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.023 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.281 09:17:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.281 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.281 "name": "Existed_Raid", 00:24:25.281 "uuid": "6ba9179d-9766-4736-8d67-442113f8b079", 00:24:25.281 "strip_size_kb": 0, 00:24:25.281 "state": "configuring", 00:24:25.281 "raid_level": "raid1", 00:24:25.281 "superblock": true, 00:24:25.281 "num_base_bdevs": 2, 00:24:25.281 "num_base_bdevs_discovered": 0, 00:24:25.281 "num_base_bdevs_operational": 2, 00:24:25.281 "base_bdevs_list": [ 00:24:25.281 { 00:24:25.281 "name": "BaseBdev1", 00:24:25.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.281 "is_configured": false, 00:24:25.281 "data_offset": 0, 00:24:25.281 "data_size": 0 00:24:25.281 }, 00:24:25.281 { 00:24:25.281 "name": "BaseBdev2", 00:24:25.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.281 "is_configured": false, 00:24:25.281 "data_offset": 0, 00:24:25.281 "data_size": 0 00:24:25.281 } 00:24:25.281 ] 00:24:25.281 }' 00:24:25.281 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.281 09:17:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.541 [2024-11-06 09:17:02.060560] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:25.541 [2024-11-06 09:17:02.060602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.541 [2024-11-06 09:17:02.068540] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.541 [2024-11-06 09:17:02.068595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.541 [2024-11-06 09:17:02.068611] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.541 [2024-11-06 09:17:02.068630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.541 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.800 [2024-11-06 09:17:02.114424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.800 BaseBdev1 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.800 [ 00:24:25.800 { 00:24:25.800 "name": "BaseBdev1", 00:24:25.800 "aliases": [ 00:24:25.800 "4fec7db7-d4a0-4e35-be1b-ff59b650bc00" 00:24:25.800 ], 00:24:25.800 "product_name": "Malloc disk", 00:24:25.800 "block_size": 4128, 00:24:25.800 "num_blocks": 8192, 00:24:25.800 "uuid": "4fec7db7-d4a0-4e35-be1b-ff59b650bc00", 00:24:25.800 "md_size": 32, 00:24:25.800 
"md_interleave": true, 00:24:25.800 "dif_type": 0, 00:24:25.800 "assigned_rate_limits": { 00:24:25.800 "rw_ios_per_sec": 0, 00:24:25.800 "rw_mbytes_per_sec": 0, 00:24:25.800 "r_mbytes_per_sec": 0, 00:24:25.800 "w_mbytes_per_sec": 0 00:24:25.800 }, 00:24:25.800 "claimed": true, 00:24:25.800 "claim_type": "exclusive_write", 00:24:25.800 "zoned": false, 00:24:25.800 "supported_io_types": { 00:24:25.800 "read": true, 00:24:25.800 "write": true, 00:24:25.800 "unmap": true, 00:24:25.800 "flush": true, 00:24:25.800 "reset": true, 00:24:25.800 "nvme_admin": false, 00:24:25.800 "nvme_io": false, 00:24:25.800 "nvme_io_md": false, 00:24:25.800 "write_zeroes": true, 00:24:25.800 "zcopy": true, 00:24:25.800 "get_zone_info": false, 00:24:25.800 "zone_management": false, 00:24:25.800 "zone_append": false, 00:24:25.800 "compare": false, 00:24:25.800 "compare_and_write": false, 00:24:25.800 "abort": true, 00:24:25.800 "seek_hole": false, 00:24:25.800 "seek_data": false, 00:24:25.800 "copy": true, 00:24:25.800 "nvme_iov_md": false 00:24:25.800 }, 00:24:25.800 "memory_domains": [ 00:24:25.800 { 00:24:25.800 "dma_device_id": "system", 00:24:25.800 "dma_device_type": 1 00:24:25.800 }, 00:24:25.800 { 00:24:25.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.800 "dma_device_type": 2 00:24:25.800 } 00:24:25.800 ], 00:24:25.800 "driver_specific": {} 00:24:25.800 } 00:24:25.800 ] 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:25.800 09:17:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:25.800 "name": "Existed_Raid", 00:24:25.800 "uuid": "80835ff5-a7fb-4cad-bfcc-522f7036b11d", 00:24:25.800 "strip_size_kb": 0, 00:24:25.800 "state": "configuring", 00:24:25.800 "raid_level": "raid1", 
00:24:25.800 "superblock": true, 00:24:25.800 "num_base_bdevs": 2, 00:24:25.800 "num_base_bdevs_discovered": 1, 00:24:25.800 "num_base_bdevs_operational": 2, 00:24:25.800 "base_bdevs_list": [ 00:24:25.800 { 00:24:25.800 "name": "BaseBdev1", 00:24:25.800 "uuid": "4fec7db7-d4a0-4e35-be1b-ff59b650bc00", 00:24:25.800 "is_configured": true, 00:24:25.800 "data_offset": 256, 00:24:25.800 "data_size": 7936 00:24:25.800 }, 00:24:25.800 { 00:24:25.800 "name": "BaseBdev2", 00:24:25.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:25.800 "is_configured": false, 00:24:25.800 "data_offset": 0, 00:24:25.800 "data_size": 0 00:24:25.800 } 00:24:25.800 ] 00:24:25.800 }' 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:25.800 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.368 [2024-11-06 09:17:02.698675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:26.368 [2024-11-06 09:17:02.698893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.368 [2024-11-06 09:17:02.706730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:26.368 [2024-11-06 09:17:02.709200] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:26.368 [2024-11-06 09:17:02.709285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.368 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.369 
09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.369 "name": "Existed_Raid", 00:24:26.369 "uuid": "45877697-0c50-4368-9501-174b3e321a6b", 00:24:26.369 "strip_size_kb": 0, 00:24:26.369 "state": "configuring", 00:24:26.369 "raid_level": "raid1", 00:24:26.369 "superblock": true, 00:24:26.369 "num_base_bdevs": 2, 00:24:26.369 "num_base_bdevs_discovered": 1, 00:24:26.369 "num_base_bdevs_operational": 2, 00:24:26.369 "base_bdevs_list": [ 00:24:26.369 { 00:24:26.369 "name": "BaseBdev1", 00:24:26.369 "uuid": "4fec7db7-d4a0-4e35-be1b-ff59b650bc00", 00:24:26.369 "is_configured": true, 00:24:26.369 "data_offset": 256, 00:24:26.369 "data_size": 7936 00:24:26.369 }, 00:24:26.369 { 00:24:26.369 "name": "BaseBdev2", 00:24:26.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.369 "is_configured": false, 00:24:26.369 "data_offset": 0, 00:24:26.369 "data_size": 0 00:24:26.369 } 00:24:26.369 ] 00:24:26.369 }' 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:24:26.369 09:17:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.994 [2024-11-06 09:17:03.273514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:26.994 [2024-11-06 09:17:03.273937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:26.994 [2024-11-06 09:17:03.274082] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:26.994 [2024-11-06 09:17:03.274240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:26.994 BaseBdev2 00:24:26.994 [2024-11-06 09:17:03.274455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:26.994 [2024-11-06 09:17:03.274485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:26.994 [2024-11-06 09:17:03.274573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 
00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.994 [ 00:24:26.994 { 00:24:26.994 "name": "BaseBdev2", 00:24:26.994 "aliases": [ 00:24:26.994 "a6dcd909-4345-4aed-9fe2-21e7c013fab7" 00:24:26.994 ], 00:24:26.994 "product_name": "Malloc disk", 00:24:26.994 "block_size": 4128, 00:24:26.994 "num_blocks": 8192, 00:24:26.994 "uuid": "a6dcd909-4345-4aed-9fe2-21e7c013fab7", 00:24:26.994 "md_size": 32, 00:24:26.994 "md_interleave": true, 00:24:26.994 "dif_type": 0, 00:24:26.994 "assigned_rate_limits": { 00:24:26.994 "rw_ios_per_sec": 0, 00:24:26.994 "rw_mbytes_per_sec": 0, 00:24:26.994 "r_mbytes_per_sec": 0, 00:24:26.994 "w_mbytes_per_sec": 0 00:24:26.994 }, 00:24:26.994 "claimed": true, 00:24:26.994 "claim_type": "exclusive_write", 
00:24:26.994 "zoned": false, 00:24:26.994 "supported_io_types": { 00:24:26.994 "read": true, 00:24:26.994 "write": true, 00:24:26.994 "unmap": true, 00:24:26.994 "flush": true, 00:24:26.994 "reset": true, 00:24:26.994 "nvme_admin": false, 00:24:26.994 "nvme_io": false, 00:24:26.994 "nvme_io_md": false, 00:24:26.994 "write_zeroes": true, 00:24:26.994 "zcopy": true, 00:24:26.994 "get_zone_info": false, 00:24:26.994 "zone_management": false, 00:24:26.994 "zone_append": false, 00:24:26.994 "compare": false, 00:24:26.994 "compare_and_write": false, 00:24:26.994 "abort": true, 00:24:26.994 "seek_hole": false, 00:24:26.994 "seek_data": false, 00:24:26.994 "copy": true, 00:24:26.994 "nvme_iov_md": false 00:24:26.994 }, 00:24:26.994 "memory_domains": [ 00:24:26.994 { 00:24:26.994 "dma_device_id": "system", 00:24:26.994 "dma_device_type": 1 00:24:26.994 }, 00:24:26.994 { 00:24:26.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.994 "dma_device_type": 2 00:24:26.994 } 00:24:26.994 ], 00:24:26.994 "driver_specific": {} 00:24:26.994 } 00:24:26.994 ] 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:26.994 
09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:26.994 "name": "Existed_Raid", 00:24:26.994 "uuid": "45877697-0c50-4368-9501-174b3e321a6b", 00:24:26.994 "strip_size_kb": 0, 00:24:26.994 "state": "online", 00:24:26.994 "raid_level": "raid1", 00:24:26.994 "superblock": true, 00:24:26.994 "num_base_bdevs": 2, 00:24:26.994 "num_base_bdevs_discovered": 2, 00:24:26.994 
"num_base_bdevs_operational": 2, 00:24:26.994 "base_bdevs_list": [ 00:24:26.994 { 00:24:26.994 "name": "BaseBdev1", 00:24:26.994 "uuid": "4fec7db7-d4a0-4e35-be1b-ff59b650bc00", 00:24:26.994 "is_configured": true, 00:24:26.994 "data_offset": 256, 00:24:26.994 "data_size": 7936 00:24:26.994 }, 00:24:26.994 { 00:24:26.994 "name": "BaseBdev2", 00:24:26.994 "uuid": "a6dcd909-4345-4aed-9fe2-21e7c013fab7", 00:24:26.994 "is_configured": true, 00:24:26.994 "data_offset": 256, 00:24:26.994 "data_size": 7936 00:24:26.994 } 00:24:26.994 ] 00:24:26.994 }' 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:26.994 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.560 09:17:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:27.560 [2024-11-06 09:17:03.854157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:27.560 "name": "Existed_Raid", 00:24:27.560 "aliases": [ 00:24:27.560 "45877697-0c50-4368-9501-174b3e321a6b" 00:24:27.560 ], 00:24:27.560 "product_name": "Raid Volume", 00:24:27.560 "block_size": 4128, 00:24:27.560 "num_blocks": 7936, 00:24:27.560 "uuid": "45877697-0c50-4368-9501-174b3e321a6b", 00:24:27.560 "md_size": 32, 00:24:27.560 "md_interleave": true, 00:24:27.560 "dif_type": 0, 00:24:27.560 "assigned_rate_limits": { 00:24:27.560 "rw_ios_per_sec": 0, 00:24:27.560 "rw_mbytes_per_sec": 0, 00:24:27.560 "r_mbytes_per_sec": 0, 00:24:27.560 "w_mbytes_per_sec": 0 00:24:27.560 }, 00:24:27.560 "claimed": false, 00:24:27.560 "zoned": false, 00:24:27.560 "supported_io_types": { 00:24:27.560 "read": true, 00:24:27.560 "write": true, 00:24:27.560 "unmap": false, 00:24:27.560 "flush": false, 00:24:27.560 "reset": true, 00:24:27.560 "nvme_admin": false, 00:24:27.560 "nvme_io": false, 00:24:27.560 "nvme_io_md": false, 00:24:27.560 "write_zeroes": true, 00:24:27.560 "zcopy": false, 00:24:27.560 "get_zone_info": false, 00:24:27.560 "zone_management": false, 00:24:27.560 "zone_append": false, 00:24:27.560 "compare": false, 00:24:27.560 "compare_and_write": false, 00:24:27.560 "abort": false, 00:24:27.560 "seek_hole": false, 00:24:27.560 "seek_data": false, 00:24:27.560 "copy": false, 00:24:27.560 "nvme_iov_md": false 00:24:27.560 }, 00:24:27.560 "memory_domains": [ 00:24:27.560 { 00:24:27.560 "dma_device_id": "system", 00:24:27.560 "dma_device_type": 1 00:24:27.560 }, 00:24:27.560 { 00:24:27.560 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:27.560 "dma_device_type": 2 00:24:27.560 }, 00:24:27.560 { 00:24:27.560 "dma_device_id": "system", 00:24:27.560 "dma_device_type": 1 00:24:27.560 }, 00:24:27.560 { 00:24:27.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:27.560 "dma_device_type": 2 00:24:27.560 } 00:24:27.560 ], 00:24:27.560 "driver_specific": { 00:24:27.560 "raid": { 00:24:27.560 "uuid": "45877697-0c50-4368-9501-174b3e321a6b", 00:24:27.560 "strip_size_kb": 0, 00:24:27.560 "state": "online", 00:24:27.560 "raid_level": "raid1", 00:24:27.560 "superblock": true, 00:24:27.560 "num_base_bdevs": 2, 00:24:27.560 "num_base_bdevs_discovered": 2, 00:24:27.560 "num_base_bdevs_operational": 2, 00:24:27.560 "base_bdevs_list": [ 00:24:27.560 { 00:24:27.560 "name": "BaseBdev1", 00:24:27.560 "uuid": "4fec7db7-d4a0-4e35-be1b-ff59b650bc00", 00:24:27.560 "is_configured": true, 00:24:27.560 "data_offset": 256, 00:24:27.560 "data_size": 7936 00:24:27.560 }, 00:24:27.560 { 00:24:27.560 "name": "BaseBdev2", 00:24:27.560 "uuid": "a6dcd909-4345-4aed-9fe2-21e7c013fab7", 00:24:27.560 "is_configured": true, 00:24:27.560 "data_offset": 256, 00:24:27.560 "data_size": 7936 00:24:27.560 } 00:24:27.560 ] 00:24:27.560 } 00:24:27.560 } 00:24:27.560 }' 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:24:27.560 BaseBdev2' 00:24:27.560 09:17:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:27.560 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:27.819 
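The `verify_raid_bdev_properties` steps above use two jq filters against the `bdev_get_bdevs` output: one collects the names of configured base bdevs, the other builds a `block_size md_size md_interleave dif_type` signature (`4128 32 true 0` here) that must match between the raid volume and each base bdev. A minimal Python sketch of what those two filters compute, run against a trimmed copy of the `Existed_Raid` JSON from the log (illustrative only, not part of the test script):

```python
import json

# Trimmed copy of the Existed_Raid JSON printed by rpc_cmd bdev_get_bdevs
# above, keeping only the fields the two jq filters touch.
raid_bdev_info = json.loads("""{
    "name": "Existed_Raid",
    "block_size": 4128,
    "md_size": 32,
    "md_interleave": true,
    "dif_type": 0,
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "BaseBdev1", "is_configured": true},
                {"name": "BaseBdev2", "is_configured": true}
            ]
        }
    }
}""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [b["name"]
                   for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# json.dumps per field reproduces jq's lowercase true/false for booleans.
def props_signature(bdev):
    return " ".join(json.dumps(bdev[k])
                    for k in ("block_size", "md_size", "md_interleave", "dif_type"))

print(base_bdev_names)                  # ['BaseBdev1', 'BaseBdev2']
print(props_signature(raid_bdev_info))  # 4128 32 true 0
```

The script then computes the same signature for `BaseBdev1` and `BaseBdev2` and compares each against the raid volume's, which is why both comparisons above evaluate `4128 32 true 0 == 4128 32 true 0`.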
09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:27.819 [2024-11-06 09:17:04.117872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:27.819 09:17:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:27.819 "name": "Existed_Raid", 00:24:27.819 "uuid": "45877697-0c50-4368-9501-174b3e321a6b", 00:24:27.819 "strip_size_kb": 0, 00:24:27.819 "state": "online", 00:24:27.819 "raid_level": "raid1", 00:24:27.819 "superblock": true, 00:24:27.819 "num_base_bdevs": 2, 00:24:27.819 "num_base_bdevs_discovered": 1, 00:24:27.819 "num_base_bdevs_operational": 1, 00:24:27.819 "base_bdevs_list": [ 00:24:27.819 { 00:24:27.819 "name": null, 00:24:27.819 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:27.819 "is_configured": false, 00:24:27.819 "data_offset": 0, 00:24:27.819 "data_size": 7936 00:24:27.819 }, 00:24:27.819 { 00:24:27.819 "name": "BaseBdev2", 00:24:27.819 "uuid": "a6dcd909-4345-4aed-9fe2-21e7c013fab7", 00:24:27.819 "is_configured": true, 00:24:27.819 "data_offset": 256, 00:24:27.819 "data_size": 7936 00:24:27.819 } 00:24:27.819 ] 00:24:27.819 }' 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:27.819 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:24:28.386 09:17:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:28.386 [2024-11-06 09:17:04.787918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:28.386 [2024-11-06 09:17:04.788060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:28.386 [2024-11-06 09:17:04.873276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:28.386 [2024-11-06 09:17:04.873364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:28.386 [2024-11-06 09:17:04.873407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:28.386 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88820 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 88820 ']' 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 88820 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88820 00:24:28.646 killing process with pid 88820 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88820' 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 88820 00:24:28.646 09:17:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 88820 00:24:28.646 [2024-11-06 09:17:04.964783] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:28.646 [2024-11-06 09:17:04.980401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:29.581 
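After `bdev_malloc_delete BaseBdev1`, the test expects the raid1 volume to remain online with a single operational base bdev (`verify_raid_bdev_state Existed_Raid online raid1 0 1`). A short Python sketch of the checks that bash helper performs, using only the field values visible in the log's `bdev_raid_get_bdevs` JSON (the helper name mirrors the script's function, but this version is illustrative):

```python
import json

# Output of rpc_cmd bdev_raid_get_bdevs all after BaseBdev1 was removed,
# reduced to the fields verify_raid_bdev_state inspects (values from the log).
raid_bdevs = json.loads("""[{
    "name": "Existed_Raid",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1
}]""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_kb, operational):
    # jq: '.[] | select(.name == "Existed_Raid")'
    info = next(b for b in bdevs if b["name"] == name)
    return (info["state"] == state
            and info["raid_level"] == level
            and info["strip_size_kb"] == strip_kb
            and info["num_base_bdevs_operational"] == operational)

print(verify_raid_bdev_state(raid_bdevs, "Existed_Raid", "online", "raid1", 0, 1))
```

Because raid1 has redundancy (`has_redundancy raid1` returns 0 above), `expected_state` is set to `online` rather than `offline` when one of the two base bdevs disappears.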
************************************ 00:24:29.581 END TEST raid_state_function_test_sb_md_interleaved 00:24:29.581 ************************************ 00:24:29.581 09:17:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:24:29.581 00:24:29.581 real 0m5.571s 00:24:29.581 user 0m8.417s 00:24:29.581 sys 0m0.825s 00:24:29.581 09:17:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:29.581 09:17:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 09:17:06 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:24:29.581 09:17:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:24:29.581 09:17:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:29.581 09:17:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:29.581 ************************************ 00:24:29.581 START TEST raid_superblock_test_md_interleaved 00:24:29.581 ************************************ 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89078 00:24:29.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89078 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89078 ']' 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:29.581 09:17:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:29.839 [2024-11-06 09:17:06.188582] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:24:29.839 [2024-11-06 09:17:06.188767] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89078 ] 00:24:29.839 [2024-11-06 09:17:06.366658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.097 [2024-11-06 09:17:06.495815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.354 [2024-11-06 09:17:06.701328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.354 [2024-11-06 09:17:06.701389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.612 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:30.871 malloc1 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:30.871 [2024-11-06 09:17:07.182858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:30.871 [2024-11-06 09:17:07.183065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.871 [2024-11-06 09:17:07.183148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:30.871 [2024-11-06 09:17:07.183269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.871 [2024-11-06 09:17:07.185789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.871 [2024-11-06 09:17:07.185985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:30.871 pt1 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:30.871 09:17:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:24:30.871 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:30.872 malloc2 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:30.872 [2024-11-06 09:17:07.234601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:30.872 [2024-11-06 09:17:07.234680] 
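The superblock test's setup loop (`for (( i = 1; i <= num_base_bdevs; i++ ))` in `bdev_raid.sh@416-426`) appends to three parallel arrays so that each `malloc<i>` is wrapped by a passthru bdev `pt<i>` with a fixed UUID. The same bookkeeping in Python, for illustration of the naming scheme only:

```python
# Sketch of the base_bdevs_malloc / base_bdevs_pt / base_bdevs_pt_uuid
# parallel arrays built by the setup loop in bdev_raid.sh.
num_base_bdevs = 2

base_bdevs_malloc = []
base_bdevs_pt = []
base_bdevs_pt_uuid = []

for i in range(1, num_base_bdevs + 1):
    base_bdevs_malloc.append(f"malloc{i}")                      # bdev_malloc_create target
    base_bdevs_pt.append(f"pt{i}")                              # bdev_passthru_create name
    base_bdevs_pt_uuid.append(f"00000000-0000-0000-0000-{i:012d}")  # -u argument

print(base_bdevs_pt)          # ['pt1', 'pt2']
print(base_bdevs_pt_uuid[0])  # 00000000-0000-0000-0000-000000000001
```

The passthru layer is what lets the test later tear down and re-examine base bdevs by a stable name (`pt1`, `pt2`) independent of the malloc bdevs underneath.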
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:30.872 [2024-11-06 09:17:07.234713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:30.872 [2024-11-06 09:17:07.234726] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:30.872 [2024-11-06 09:17:07.237157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:30.872 [2024-11-06 09:17:07.237201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:30.872 pt2 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:30.872 [2024-11-06 09:17:07.242656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:30.872 [2024-11-06 09:17:07.245195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:30.872 [2024-11-06 09:17:07.245547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:30.872 [2024-11-06 09:17:07.245692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:30.872 [2024-11-06 09:17:07.245853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:24:30.872 [2024-11-06 09:17:07.246109] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:30.872 [2024-11-06 09:17:07.246240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:30.872 [2024-11-06 09:17:07.246559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.872 
09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:30.872 "name": "raid_bdev1", 00:24:30.872 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:30.872 "strip_size_kb": 0, 00:24:30.872 "state": "online", 00:24:30.872 "raid_level": "raid1", 00:24:30.872 "superblock": true, 00:24:30.872 "num_base_bdevs": 2, 00:24:30.872 "num_base_bdevs_discovered": 2, 00:24:30.872 "num_base_bdevs_operational": 2, 00:24:30.872 "base_bdevs_list": [ 00:24:30.872 { 00:24:30.872 "name": "pt1", 00:24:30.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:30.872 "is_configured": true, 00:24:30.872 "data_offset": 256, 00:24:30.872 "data_size": 7936 00:24:30.872 }, 00:24:30.872 { 00:24:30.872 "name": "pt2", 00:24:30.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:30.872 "is_configured": true, 00:24:30.872 "data_offset": 256, 00:24:30.872 "data_size": 7936 00:24:30.872 } 00:24:30.872 ] 00:24:30.872 }' 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:30.872 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:31.469 09:17:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:31.469 [2024-11-06 09:17:07.751192] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:31.469 "name": "raid_bdev1", 00:24:31.469 "aliases": [ 00:24:31.469 "b231a403-8a99-43a6-a008-84ae39bfd232" 00:24:31.469 ], 00:24:31.469 "product_name": "Raid Volume", 00:24:31.469 "block_size": 4128, 00:24:31.469 "num_blocks": 7936, 00:24:31.469 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:31.469 "md_size": 32, 00:24:31.469 "md_interleave": true, 00:24:31.469 "dif_type": 0, 00:24:31.469 "assigned_rate_limits": { 00:24:31.469 "rw_ios_per_sec": 0, 00:24:31.469 "rw_mbytes_per_sec": 0, 00:24:31.469 "r_mbytes_per_sec": 0, 00:24:31.469 "w_mbytes_per_sec": 0 00:24:31.469 }, 00:24:31.469 "claimed": false, 00:24:31.469 "zoned": false, 00:24:31.469 "supported_io_types": { 00:24:31.469 "read": true, 00:24:31.469 "write": true, 00:24:31.469 "unmap": false, 00:24:31.469 "flush": false, 00:24:31.469 "reset": true, 
00:24:31.469 "nvme_admin": false, 00:24:31.469 "nvme_io": false, 00:24:31.469 "nvme_io_md": false, 00:24:31.469 "write_zeroes": true, 00:24:31.469 "zcopy": false, 00:24:31.469 "get_zone_info": false, 00:24:31.469 "zone_management": false, 00:24:31.469 "zone_append": false, 00:24:31.469 "compare": false, 00:24:31.469 "compare_and_write": false, 00:24:31.469 "abort": false, 00:24:31.469 "seek_hole": false, 00:24:31.469 "seek_data": false, 00:24:31.469 "copy": false, 00:24:31.469 "nvme_iov_md": false 00:24:31.469 }, 00:24:31.469 "memory_domains": [ 00:24:31.469 { 00:24:31.469 "dma_device_id": "system", 00:24:31.469 "dma_device_type": 1 00:24:31.469 }, 00:24:31.469 { 00:24:31.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.469 "dma_device_type": 2 00:24:31.469 }, 00:24:31.469 { 00:24:31.469 "dma_device_id": "system", 00:24:31.469 "dma_device_type": 1 00:24:31.469 }, 00:24:31.469 { 00:24:31.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.469 "dma_device_type": 2 00:24:31.469 } 00:24:31.469 ], 00:24:31.469 "driver_specific": { 00:24:31.469 "raid": { 00:24:31.469 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:31.469 "strip_size_kb": 0, 00:24:31.469 "state": "online", 00:24:31.469 "raid_level": "raid1", 00:24:31.469 "superblock": true, 00:24:31.469 "num_base_bdevs": 2, 00:24:31.469 "num_base_bdevs_discovered": 2, 00:24:31.469 "num_base_bdevs_operational": 2, 00:24:31.469 "base_bdevs_list": [ 00:24:31.469 { 00:24:31.469 "name": "pt1", 00:24:31.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.469 "is_configured": true, 00:24:31.469 "data_offset": 256, 00:24:31.469 "data_size": 7936 00:24:31.469 }, 00:24:31.469 { 00:24:31.469 "name": "pt2", 00:24:31.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.469 "is_configured": true, 00:24:31.469 "data_offset": 256, 00:24:31.469 "data_size": 7936 00:24:31.469 } 00:24:31.469 ] 00:24:31.469 } 00:24:31.469 } 00:24:31.469 }' 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:31.469 pt2' 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.469 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.470 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:31.470 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:31.470 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:31.470 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:31.470 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.470 
09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.470 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:31.470 09:17:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 [2024-11-06 09:17:08.019214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b231a403-8a99-43a6-a008-84ae39bfd232 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z b231a403-8a99-43a6-a008-84ae39bfd232 ']' 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.727 09:17:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 [2024-11-06 09:17:08.062851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:31.727 [2024-11-06 09:17:08.063017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:31.727 [2024-11-06 09:17:08.063236] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:31.727 [2024-11-06 09:17:08.063416] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:31.727 [2024-11-06 09:17:08.063549] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:24:31.727 09:17:08 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:24:31.727 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:24:31.728 
09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.728 [2024-11-06 09:17:08.190904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:31.728 [2024-11-06 09:17:08.193466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:31.728 [2024-11-06 09:17:08.193571] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:31.728 [2024-11-06 09:17:08.193654] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:31.728 [2024-11-06 09:17:08.193681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:31.728 [2024-11-06 09:17:08.193696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:31.728 request: 
00:24:31.728 { 00:24:31.728 "name": "raid_bdev1", 00:24:31.728 "raid_level": "raid1", 00:24:31.728 "base_bdevs": [ 00:24:31.728 "malloc1", 00:24:31.728 "malloc2" 00:24:31.728 ], 00:24:31.728 "superblock": false, 00:24:31.728 "method": "bdev_raid_create", 00:24:31.728 "req_id": 1 00:24:31.728 } 00:24:31.728 Got JSON-RPC error response 00:24:31.728 response: 00:24:31.728 { 00:24:31.728 "code": -17, 00:24:31.728 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:31.728 } 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.728 [2024-11-06 09:17:08.254909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:31.728 [2024-11-06 09:17:08.255104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:31.728 [2024-11-06 09:17:08.255252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:31.728 [2024-11-06 09:17:08.255370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:31.728 [2024-11-06 09:17:08.258010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:31.728 [2024-11-06 09:17:08.258174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:31.728 [2024-11-06 09:17:08.258356] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:31.728 [2024-11-06 09:17:08.258548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:31.728 pt1 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.728 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:31.985 09:17:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:31.985 "name": "raid_bdev1", 00:24:31.985 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:31.985 "strip_size_kb": 0, 00:24:31.985 "state": "configuring", 00:24:31.985 "raid_level": "raid1", 00:24:31.985 "superblock": true, 00:24:31.985 "num_base_bdevs": 2, 00:24:31.985 "num_base_bdevs_discovered": 1, 00:24:31.985 "num_base_bdevs_operational": 2, 00:24:31.985 "base_bdevs_list": [ 00:24:31.985 { 00:24:31.985 "name": "pt1", 00:24:31.985 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:31.985 "is_configured": true, 00:24:31.985 
"data_offset": 256, 00:24:31.985 "data_size": 7936 00:24:31.985 }, 00:24:31.985 { 00:24:31.985 "name": null, 00:24:31.985 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:31.985 "is_configured": false, 00:24:31.985 "data_offset": 256, 00:24:31.985 "data_size": 7936 00:24:31.985 } 00:24:31.985 ] 00:24:31.985 }' 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:31.985 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:32.243 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:24:32.243 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:24:32.243 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:32.244 [2024-11-06 09:17:08.767057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:32.244 [2024-11-06 09:17:08.767281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:32.244 [2024-11-06 09:17:08.767322] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:32.244 [2024-11-06 09:17:08.767340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:32.244 [2024-11-06 09:17:08.767561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:32.244 [2024-11-06 09:17:08.767589] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:24:32.244 [2024-11-06 09:17:08.767655] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:32.244 [2024-11-06 09:17:08.767693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:32.244 [2024-11-06 09:17:08.767811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:32.244 [2024-11-06 09:17:08.767852] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:32.244 [2024-11-06 09:17:08.767950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:32.244 [2024-11-06 09:17:08.768050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:32.244 [2024-11-06 09:17:08.768066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:32.244 [2024-11-06 09:17:08.768152] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.244 pt2 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:32.244 09:17:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:32.244 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:32.502 "name": "raid_bdev1", 00:24:32.502 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:32.502 "strip_size_kb": 0, 00:24:32.502 "state": "online", 00:24:32.502 "raid_level": "raid1", 00:24:32.502 "superblock": true, 00:24:32.502 "num_base_bdevs": 2, 00:24:32.502 "num_base_bdevs_discovered": 2, 00:24:32.502 "num_base_bdevs_operational": 2, 00:24:32.502 "base_bdevs_list": [ 00:24:32.502 { 00:24:32.502 "name": "pt1", 00:24:32.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:32.502 "is_configured": true, 00:24:32.502 
"data_offset": 256, 00:24:32.502 "data_size": 7936 00:24:32.502 }, 00:24:32.502 { 00:24:32.502 "name": "pt2", 00:24:32.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:32.502 "is_configured": true, 00:24:32.502 "data_offset": 256, 00:24:32.502 "data_size": 7936 00:24:32.502 } 00:24:32.502 ] 00:24:32.502 }' 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:32.502 09:17:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:24:32.760 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:32.760 [2024-11-06 09:17:09.291529] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:24:33.019 "name": "raid_bdev1", 00:24:33.019 "aliases": [ 00:24:33.019 "b231a403-8a99-43a6-a008-84ae39bfd232" 00:24:33.019 ], 00:24:33.019 "product_name": "Raid Volume", 00:24:33.019 "block_size": 4128, 00:24:33.019 "num_blocks": 7936, 00:24:33.019 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:33.019 "md_size": 32, 00:24:33.019 "md_interleave": true, 00:24:33.019 "dif_type": 0, 00:24:33.019 "assigned_rate_limits": { 00:24:33.019 "rw_ios_per_sec": 0, 00:24:33.019 "rw_mbytes_per_sec": 0, 00:24:33.019 "r_mbytes_per_sec": 0, 00:24:33.019 "w_mbytes_per_sec": 0 00:24:33.019 }, 00:24:33.019 "claimed": false, 00:24:33.019 "zoned": false, 00:24:33.019 "supported_io_types": { 00:24:33.019 "read": true, 00:24:33.019 "write": true, 00:24:33.019 "unmap": false, 00:24:33.019 "flush": false, 00:24:33.019 "reset": true, 00:24:33.019 "nvme_admin": false, 00:24:33.019 "nvme_io": false, 00:24:33.019 "nvme_io_md": false, 00:24:33.019 "write_zeroes": true, 00:24:33.019 "zcopy": false, 00:24:33.019 "get_zone_info": false, 00:24:33.019 "zone_management": false, 00:24:33.019 "zone_append": false, 00:24:33.019 "compare": false, 00:24:33.019 "compare_and_write": false, 00:24:33.019 "abort": false, 00:24:33.019 "seek_hole": false, 00:24:33.019 "seek_data": false, 00:24:33.019 "copy": false, 00:24:33.019 "nvme_iov_md": false 00:24:33.019 }, 00:24:33.019 "memory_domains": [ 00:24:33.019 { 00:24:33.019 "dma_device_id": "system", 00:24:33.019 "dma_device_type": 1 00:24:33.019 }, 00:24:33.019 { 00:24:33.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.019 "dma_device_type": 2 00:24:33.019 }, 00:24:33.019 { 00:24:33.019 "dma_device_id": "system", 00:24:33.019 "dma_device_type": 1 00:24:33.019 }, 00:24:33.019 { 00:24:33.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.019 "dma_device_type": 2 00:24:33.019 } 00:24:33.019 ], 00:24:33.019 "driver_specific": { 
00:24:33.019 "raid": { 00:24:33.019 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:33.019 "strip_size_kb": 0, 00:24:33.019 "state": "online", 00:24:33.019 "raid_level": "raid1", 00:24:33.019 "superblock": true, 00:24:33.019 "num_base_bdevs": 2, 00:24:33.019 "num_base_bdevs_discovered": 2, 00:24:33.019 "num_base_bdevs_operational": 2, 00:24:33.019 "base_bdevs_list": [ 00:24:33.019 { 00:24:33.019 "name": "pt1", 00:24:33.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:33.019 "is_configured": true, 00:24:33.019 "data_offset": 256, 00:24:33.019 "data_size": 7936 00:24:33.019 }, 00:24:33.019 { 00:24:33.019 "name": "pt2", 00:24:33.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.019 "is_configured": true, 00:24:33.019 "data_offset": 256, 00:24:33.019 "data_size": 7936 00:24:33.019 } 00:24:33.019 ] 00:24:33.019 } 00:24:33.019 } 00:24:33.019 }' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:24:33.019 pt2' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:24:33.019 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:24:33.281 [2024-11-06 09:17:09.555577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' b231a403-8a99-43a6-a008-84ae39bfd232 '!=' b231a403-8a99-43a6-a008-84ae39bfd232 ']' 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.281 [2024-11-06 09:17:09.599293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.281 "name": "raid_bdev1", 00:24:33.281 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:33.281 "strip_size_kb": 0, 00:24:33.281 "state": "online", 00:24:33.281 "raid_level": "raid1", 00:24:33.281 "superblock": true, 00:24:33.281 "num_base_bdevs": 2, 00:24:33.281 "num_base_bdevs_discovered": 1, 00:24:33.281 "num_base_bdevs_operational": 1, 00:24:33.281 "base_bdevs_list": [ 00:24:33.281 { 00:24:33.281 "name": null, 00:24:33.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.281 "is_configured": false, 
00:24:33.281 "data_offset": 0, 00:24:33.281 "data_size": 7936 00:24:33.281 }, 00:24:33.281 { 00:24:33.281 "name": "pt2", 00:24:33.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.281 "is_configured": true, 00:24:33.281 "data_offset": 256, 00:24:33.281 "data_size": 7936 00:24:33.281 } 00:24:33.281 ] 00:24:33.281 }' 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.281 09:17:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.847 [2024-11-06 09:17:10.131418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:33.847 [2024-11-06 09:17:10.131582] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:33.847 [2024-11-06 09:17:10.131697] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:33.847 [2024-11-06 09:17:10.131762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:33.847 [2024-11-06 09:17:10.131782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.847 09:17:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.847 [2024-11-06 09:17:10.199471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:33.847 [2024-11-06 09:17:10.199538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:33.847 [2024-11-06 09:17:10.199563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:24:33.847 [2024-11-06 09:17:10.199579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:33.847 [2024-11-06 09:17:10.202144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:33.847 [2024-11-06 09:17:10.202195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:33.847 [2024-11-06 09:17:10.202263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:33.847 [2024-11-06 09:17:10.202323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:33.847 [2024-11-06 09:17:10.202409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:33.847 [2024-11-06 09:17:10.202431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:33.847 [2024-11-06 09:17:10.202539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:33.847 [2024-11-06 09:17:10.202642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:33.847 [2024-11-06 09:17:10.202657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:24:33.847 [2024-11-06 09:17:10.202737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:33.847 pt2 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:33.847 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:33.848 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.848 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:33.848 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:33.848 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.848 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:33.848 "name": "raid_bdev1", 00:24:33.848 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:33.848 "strip_size_kb": 0, 00:24:33.848 "state": "online", 00:24:33.848 "raid_level": "raid1", 00:24:33.848 "superblock": true, 00:24:33.848 "num_base_bdevs": 2, 00:24:33.848 "num_base_bdevs_discovered": 1, 00:24:33.848 "num_base_bdevs_operational": 1, 00:24:33.848 "base_bdevs_list": [ 00:24:33.848 { 00:24:33.848 "name": null, 00:24:33.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.848 "is_configured": false, 00:24:33.848 "data_offset": 256, 00:24:33.848 "data_size": 7936 00:24:33.848 }, 00:24:33.848 { 00:24:33.848 "name": "pt2", 00:24:33.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:33.848 "is_configured": true, 00:24:33.848 "data_offset": 256, 00:24:33.848 "data_size": 7936 00:24:33.848 } 00:24:33.848 ] 00:24:33.848 }' 00:24:33.848 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:33.848 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.416 [2024-11-06 09:17:10.723570] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:34.416 [2024-11-06 09:17:10.723739] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:34.416 [2024-11-06 09:17:10.723862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:34.416 
[2024-11-06 09:17:10.723934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:34.416 [2024-11-06 09:17:10.723950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.416 [2024-11-06 09:17:10.783613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:34.416 [2024-11-06 09:17:10.783847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:24:34.416 [2024-11-06 09:17:10.783906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:34.416 [2024-11-06 09:17:10.783924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:34.416 [2024-11-06 09:17:10.786534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:34.416 [2024-11-06 09:17:10.786580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:34.416 [2024-11-06 09:17:10.786667] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:34.416 [2024-11-06 09:17:10.786726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:34.416 [2024-11-06 09:17:10.787052] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:34.416 [2024-11-06 09:17:10.787229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:34.416 [2024-11-06 09:17:10.787299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:24:34.416 [2024-11-06 09:17:10.787380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:34.416 [2024-11-06 09:17:10.787486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:24:34.416 [2024-11-06 09:17:10.787502] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:34.416 [2024-11-06 09:17:10.787590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:34.416 [2024-11-06 09:17:10.787705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:24:34.416 [2024-11-06 09:17:10.787735] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:24:34.416 [2024-11-06 
09:17:10.787895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.416 pt1 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.416 09:17:10 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:34.416 "name": "raid_bdev1", 00:24:34.416 "uuid": "b231a403-8a99-43a6-a008-84ae39bfd232", 00:24:34.416 "strip_size_kb": 0, 00:24:34.416 "state": "online", 00:24:34.416 "raid_level": "raid1", 00:24:34.416 "superblock": true, 00:24:34.416 "num_base_bdevs": 2, 00:24:34.416 "num_base_bdevs_discovered": 1, 00:24:34.416 "num_base_bdevs_operational": 1, 00:24:34.416 "base_bdevs_list": [ 00:24:34.416 { 00:24:34.416 "name": null, 00:24:34.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:34.416 "is_configured": false, 00:24:34.416 "data_offset": 256, 00:24:34.416 "data_size": 7936 00:24:34.416 }, 00:24:34.416 { 00:24:34.416 "name": "pt2", 00:24:34.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:34.416 "is_configured": true, 00:24:34.416 "data_offset": 256, 00:24:34.416 "data_size": 7936 00:24:34.416 } 00:24:34.416 ] 00:24:34.416 }' 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:34.416 09:17:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.984 09:17:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:24:34.984 [2024-11-06 09:17:11.356424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' b231a403-8a99-43a6-a008-84ae39bfd232 '!=' b231a403-8a99-43a6-a008-84ae39bfd232 ']' 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89078 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89078 ']' 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89078 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89078 00:24:34.984 killing process with pid 89078 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89078' 00:24:34.984 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 89078 00:24:34.985 [2024-11-06 09:17:11.430334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:34.985 09:17:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 89078 00:24:34.985 [2024-11-06 09:17:11.430427] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:34.985 [2024-11-06 09:17:11.430488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:34.985 [2024-11-06 09:17:11.430509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:24:35.243 [2024-11-06 09:17:11.615446] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:36.178 09:17:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:24:36.178 00:24:36.178 real 0m6.586s 00:24:36.178 user 0m10.400s 00:24:36.178 sys 0m0.957s 00:24:36.178 ************************************ 00:24:36.178 END TEST raid_superblock_test_md_interleaved 00:24:36.178 ************************************ 00:24:36.178 09:17:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:36.178 09:17:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:36.437 09:17:12 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:24:36.437 09:17:12 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:24:36.437 09:17:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:36.437 09:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:36.437 ************************************ 00:24:36.437 START TEST raid_rebuild_test_sb_md_interleaved 00:24:36.437 ************************************ 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:24:36.437 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89402 00:24:36.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89402 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 89402 ']' 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:36.438 09:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:36.438 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:36.438 Zero copy mechanism will not be used. 00:24:36.438 [2024-11-06 09:17:12.844517] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:24:36.438 [2024-11-06 09:17:12.844692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89402 ] 00:24:36.696 [2024-11-06 09:17:13.034516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.696 [2024-11-06 09:17:13.194164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.954 [2024-11-06 09:17:13.415877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:36.954 [2024-11-06 09:17:13.415995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.523 BaseBdev1_malloc 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.523 [2024-11-06 09:17:13.907262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:37.523 [2024-11-06 09:17:13.907496] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.523 [2024-11-06 09:17:13.907539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:24:37.523 [2024-11-06 09:17:13.907559] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.523 [2024-11-06 09:17:13.910446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.523 [2024-11-06 09:17:13.910631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:37.523 BaseBdev1 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.523 BaseBdev2_malloc 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.523 [2024-11-06 09:17:13.964708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:37.523 [2024-11-06 09:17:13.964795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.523 [2024-11-06 09:17:13.964848] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:24:37.523 [2024-11-06 09:17:13.964873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.523 [2024-11-06 09:17:13.967329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.523 [2024-11-06 09:17:13.967556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:37.523 BaseBdev2 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.523 spare_malloc 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.523 spare_delay 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.523 [2024-11-06 09:17:14.042499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:37.523 [2024-11-06 09:17:14.042576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:37.523 [2024-11-06 09:17:14.042620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:37.523 [2024-11-06 09:17:14.042641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:37.523 [2024-11-06 09:17:14.045132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:37.523 [2024-11-06 09:17:14.045182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:37.523 spare 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.523 [2024-11-06 09:17:14.050540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:37.523 [2024-11-06 09:17:14.053050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:37.523 [2024-11-06 
09:17:14.053298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:37.523 [2024-11-06 09:17:14.053322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:37.523 [2024-11-06 09:17:14.053433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:37.523 [2024-11-06 09:17:14.053539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:37.523 [2024-11-06 09:17:14.053554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:37.523 [2024-11-06 09:17:14.053646] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:37.523 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.851 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:37.851 "name": "raid_bdev1", 00:24:37.851 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:37.851 "strip_size_kb": 0, 00:24:37.851 "state": "online", 00:24:37.851 "raid_level": "raid1", 00:24:37.851 "superblock": true, 00:24:37.851 "num_base_bdevs": 2, 00:24:37.851 "num_base_bdevs_discovered": 2, 00:24:37.851 "num_base_bdevs_operational": 2, 00:24:37.852 "base_bdevs_list": [ 00:24:37.852 { 00:24:37.852 "name": "BaseBdev1", 00:24:37.852 "uuid": "d3749f59-7604-5a3f-918d-893b792c0e03", 00:24:37.852 "is_configured": true, 00:24:37.852 "data_offset": 256, 00:24:37.852 "data_size": 7936 00:24:37.852 }, 00:24:37.852 { 00:24:37.852 "name": "BaseBdev2", 00:24:37.852 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:37.852 "is_configured": true, 00:24:37.852 "data_offset": 256, 00:24:37.852 "data_size": 7936 00:24:37.852 } 00:24:37.852 ] 00:24:37.852 }' 00:24:37.852 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:37.852 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:38.112 09:17:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:24:38.112 [2024-11-06 09:17:14.567066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:38.112 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:24:38.372 09:17:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:38.372 [2024-11-06 09:17:14.670711] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:38.372 09:17:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:38.372 "name": "raid_bdev1", 00:24:38.372 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:38.372 "strip_size_kb": 0, 00:24:38.372 "state": "online", 00:24:38.372 "raid_level": "raid1", 00:24:38.372 "superblock": true, 00:24:38.372 "num_base_bdevs": 2, 00:24:38.372 "num_base_bdevs_discovered": 1, 00:24:38.372 "num_base_bdevs_operational": 1, 00:24:38.372 "base_bdevs_list": [ 00:24:38.372 { 00:24:38.372 "name": null, 00:24:38.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:38.372 "is_configured": false, 00:24:38.372 "data_offset": 0, 00:24:38.372 "data_size": 7936 00:24:38.372 }, 00:24:38.372 { 00:24:38.372 "name": "BaseBdev2", 00:24:38.372 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:38.372 "is_configured": true, 00:24:38.372 "data_offset": 256, 00:24:38.372 "data_size": 7936 00:24:38.372 } 00:24:38.372 ] 00:24:38.372 }' 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:38.372 09:17:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:38.940 09:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:38.940 09:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.940 09:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:38.940 [2024-11-06 09:17:15.202895] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:38.940 [2024-11-06 09:17:15.219800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:38.940 09:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.940 09:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:24:38.940 [2024-11-06 09:17:15.222322] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.876 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:39.876 "name": "raid_bdev1", 00:24:39.876 
"uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:39.876 "strip_size_kb": 0, 00:24:39.876 "state": "online", 00:24:39.876 "raid_level": "raid1", 00:24:39.876 "superblock": true, 00:24:39.876 "num_base_bdevs": 2, 00:24:39.876 "num_base_bdevs_discovered": 2, 00:24:39.876 "num_base_bdevs_operational": 2, 00:24:39.876 "process": { 00:24:39.876 "type": "rebuild", 00:24:39.876 "target": "spare", 00:24:39.876 "progress": { 00:24:39.876 "blocks": 2560, 00:24:39.876 "percent": 32 00:24:39.876 } 00:24:39.876 }, 00:24:39.876 "base_bdevs_list": [ 00:24:39.876 { 00:24:39.876 "name": "spare", 00:24:39.876 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:39.876 "is_configured": true, 00:24:39.876 "data_offset": 256, 00:24:39.876 "data_size": 7936 00:24:39.876 }, 00:24:39.876 { 00:24:39.876 "name": "BaseBdev2", 00:24:39.876 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:39.876 "is_configured": true, 00:24:39.876 "data_offset": 256, 00:24:39.876 "data_size": 7936 00:24:39.876 } 00:24:39.876 ] 00:24:39.877 }' 00:24:39.877 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:39.877 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:39.877 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:39.877 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:39.877 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:39.877 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.877 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:39.877 [2024-11-06 09:17:16.379585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:24:40.136 [2024-11-06 09:17:16.431601] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:40.136 [2024-11-06 09:17:16.431878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:40.136 [2024-11-06 09:17:16.432013] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:40.136 [2024-11-06 09:17:16.432170] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:40.136 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.137 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.137 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.137 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:40.137 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.137 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:40.137 "name": "raid_bdev1", 00:24:40.137 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:40.137 "strip_size_kb": 0, 00:24:40.137 "state": "online", 00:24:40.137 "raid_level": "raid1", 00:24:40.137 "superblock": true, 00:24:40.137 "num_base_bdevs": 2, 00:24:40.137 "num_base_bdevs_discovered": 1, 00:24:40.137 "num_base_bdevs_operational": 1, 00:24:40.137 "base_bdevs_list": [ 00:24:40.137 { 00:24:40.137 "name": null, 00:24:40.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.137 "is_configured": false, 00:24:40.137 "data_offset": 0, 00:24:40.137 "data_size": 7936 00:24:40.137 }, 00:24:40.137 { 00:24:40.137 "name": "BaseBdev2", 00:24:40.137 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:40.137 "is_configured": true, 00:24:40.137 "data_offset": 256, 00:24:40.137 "data_size": 7936 00:24:40.137 } 00:24:40.137 ] 00:24:40.137 }' 00:24:40.137 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:40.137 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:40.711 09:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.711 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:40.711 "name": "raid_bdev1", 00:24:40.711 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:40.711 "strip_size_kb": 0, 00:24:40.711 "state": "online", 00:24:40.711 "raid_level": "raid1", 00:24:40.711 "superblock": true, 00:24:40.711 "num_base_bdevs": 2, 00:24:40.711 "num_base_bdevs_discovered": 1, 00:24:40.711 "num_base_bdevs_operational": 1, 00:24:40.711 "base_bdevs_list": [ 00:24:40.712 { 00:24:40.712 "name": null, 00:24:40.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:40.712 "is_configured": false, 00:24:40.712 "data_offset": 0, 00:24:40.712 "data_size": 7936 00:24:40.712 }, 00:24:40.712 { 00:24:40.712 "name": "BaseBdev2", 00:24:40.712 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:40.712 "is_configured": true, 00:24:40.712 "data_offset": 256, 00:24:40.712 "data_size": 7936 00:24:40.712 } 00:24:40.712 ] 00:24:40.712 }' 
00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:40.712 [2024-11-06 09:17:17.124999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:40.712 [2024-11-06 09:17:17.140757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.712 09:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:24:40.712 [2024-11-06 09:17:17.143304] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:41.653 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.912 "name": "raid_bdev1", 00:24:41.912 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:41.912 "strip_size_kb": 0, 00:24:41.912 "state": "online", 00:24:41.912 "raid_level": "raid1", 00:24:41.912 "superblock": true, 00:24:41.912 "num_base_bdevs": 2, 00:24:41.912 "num_base_bdevs_discovered": 2, 00:24:41.912 "num_base_bdevs_operational": 2, 00:24:41.912 "process": { 00:24:41.912 "type": "rebuild", 00:24:41.912 "target": "spare", 00:24:41.912 "progress": { 00:24:41.912 "blocks": 2560, 00:24:41.912 "percent": 32 00:24:41.912 } 00:24:41.912 }, 00:24:41.912 "base_bdevs_list": [ 00:24:41.912 { 00:24:41.912 "name": "spare", 00:24:41.912 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:41.912 "is_configured": true, 00:24:41.912 "data_offset": 256, 00:24:41.912 "data_size": 7936 00:24:41.912 }, 00:24:41.912 { 00:24:41.912 "name": "BaseBdev2", 00:24:41.912 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:41.912 "is_configured": true, 00:24:41.912 "data_offset": 256, 00:24:41.912 "data_size": 7936 00:24:41.912 } 00:24:41.912 ] 00:24:41.912 }' 00:24:41.912 09:17:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:24:41.912 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=798 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:41.912 09:17:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.912 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:41.912 "name": "raid_bdev1", 00:24:41.912 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:41.912 "strip_size_kb": 0, 00:24:41.912 "state": "online", 00:24:41.912 "raid_level": "raid1", 00:24:41.912 "superblock": true, 00:24:41.912 "num_base_bdevs": 2, 00:24:41.912 "num_base_bdevs_discovered": 2, 00:24:41.912 "num_base_bdevs_operational": 2, 00:24:41.912 "process": { 00:24:41.912 "type": "rebuild", 00:24:41.912 "target": "spare", 00:24:41.912 "progress": { 00:24:41.912 "blocks": 2816, 00:24:41.912 "percent": 35 00:24:41.912 } 00:24:41.912 }, 00:24:41.912 "base_bdevs_list": [ 00:24:41.912 { 00:24:41.912 "name": "spare", 00:24:41.912 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:41.912 "is_configured": true, 00:24:41.913 "data_offset": 256, 00:24:41.913 "data_size": 7936 00:24:41.913 }, 00:24:41.913 { 00:24:41.913 "name": "BaseBdev2", 00:24:41.913 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:41.913 "is_configured": true, 00:24:41.913 "data_offset": 256, 00:24:41.913 "data_size": 7936 00:24:41.913 } 00:24:41.913 ] 00:24:41.913 }' 00:24:41.913 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:41.913 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:41.913 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:42.171 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:42.171 09:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:43.106 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:43.106 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:43.106 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:43.106 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:43.106 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:43.106 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:43.106 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.107 09:17:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:43.107 "name": "raid_bdev1", 00:24:43.107 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:43.107 "strip_size_kb": 0, 00:24:43.107 "state": "online", 00:24:43.107 "raid_level": "raid1", 00:24:43.107 "superblock": true, 00:24:43.107 "num_base_bdevs": 2, 00:24:43.107 "num_base_bdevs_discovered": 2, 00:24:43.107 "num_base_bdevs_operational": 2, 00:24:43.107 "process": { 00:24:43.107 "type": "rebuild", 00:24:43.107 "target": "spare", 00:24:43.107 "progress": { 00:24:43.107 "blocks": 5888, 00:24:43.107 "percent": 74 00:24:43.107 } 00:24:43.107 }, 00:24:43.107 "base_bdevs_list": [ 00:24:43.107 { 00:24:43.107 "name": "spare", 00:24:43.107 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:43.107 "is_configured": true, 00:24:43.107 "data_offset": 256, 00:24:43.107 "data_size": 7936 00:24:43.107 }, 00:24:43.107 { 00:24:43.107 "name": "BaseBdev2", 00:24:43.107 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:43.107 "is_configured": true, 00:24:43.107 "data_offset": 256, 00:24:43.107 "data_size": 7936 00:24:43.107 } 00:24:43.107 ] 00:24:43.107 }' 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:43.107 09:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:24:44.038 [2024-11-06 09:17:20.265514] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:44.038 [2024-11-06 09:17:20.265600] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:44.038 [2024-11-06 09:17:20.265745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.383 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.384 "name": "raid_bdev1", 00:24:44.384 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:44.384 "strip_size_kb": 0, 00:24:44.384 "state": "online", 00:24:44.384 "raid_level": "raid1", 00:24:44.384 "superblock": true, 00:24:44.384 "num_base_bdevs": 2, 00:24:44.384 
"num_base_bdevs_discovered": 2, 00:24:44.384 "num_base_bdevs_operational": 2, 00:24:44.384 "base_bdevs_list": [ 00:24:44.384 { 00:24:44.384 "name": "spare", 00:24:44.384 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:44.384 "is_configured": true, 00:24:44.384 "data_offset": 256, 00:24:44.384 "data_size": 7936 00:24:44.384 }, 00:24:44.384 { 00:24:44.384 "name": "BaseBdev2", 00:24:44.384 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:44.384 "is_configured": true, 00:24:44.384 "data_offset": 256, 00:24:44.384 "data_size": 7936 00:24:44.384 } 00:24:44.384 ] 00:24:44.384 }' 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.384 09:17:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:44.384 "name": "raid_bdev1", 00:24:44.384 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:44.384 "strip_size_kb": 0, 00:24:44.384 "state": "online", 00:24:44.384 "raid_level": "raid1", 00:24:44.384 "superblock": true, 00:24:44.384 "num_base_bdevs": 2, 00:24:44.384 "num_base_bdevs_discovered": 2, 00:24:44.384 "num_base_bdevs_operational": 2, 00:24:44.384 "base_bdevs_list": [ 00:24:44.384 { 00:24:44.384 "name": "spare", 00:24:44.384 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:44.384 "is_configured": true, 00:24:44.384 "data_offset": 256, 00:24:44.384 "data_size": 7936 00:24:44.384 }, 00:24:44.384 { 00:24:44.384 "name": "BaseBdev2", 00:24:44.384 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:44.384 "is_configured": true, 00:24:44.384 "data_offset": 256, 00:24:44.384 "data_size": 7936 00:24:44.384 } 00:24:44.384 ] 00:24:44.384 }' 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:44.384 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:44.643 09:17:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:44.643 "name": 
"raid_bdev1", 00:24:44.643 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:44.643 "strip_size_kb": 0, 00:24:44.643 "state": "online", 00:24:44.643 "raid_level": "raid1", 00:24:44.643 "superblock": true, 00:24:44.643 "num_base_bdevs": 2, 00:24:44.643 "num_base_bdevs_discovered": 2, 00:24:44.643 "num_base_bdevs_operational": 2, 00:24:44.643 "base_bdevs_list": [ 00:24:44.643 { 00:24:44.643 "name": "spare", 00:24:44.643 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:44.643 "is_configured": true, 00:24:44.643 "data_offset": 256, 00:24:44.643 "data_size": 7936 00:24:44.643 }, 00:24:44.643 { 00:24:44.643 "name": "BaseBdev2", 00:24:44.643 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:44.643 "is_configured": true, 00:24:44.643 "data_offset": 256, 00:24:44.643 "data_size": 7936 00:24:44.643 } 00:24:44.643 ] 00:24:44.643 }' 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:44.643 09:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:44.902 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:24:44.902 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.902 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:44.902 [2024-11-06 09:17:21.401277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:44.902 [2024-11-06 09:17:21.401319] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:44.902 [2024-11-06 09:17:21.401426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:44.902 [2024-11-06 09:17:21.401517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:44.902 [2024-11-06 
09:17:21.401537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:44.902 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.902 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:24:44.902 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:44.902 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.903 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:44.903 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.162 09:17:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.162 [2024-11-06 09:17:21.477261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:45.162 [2024-11-06 09:17:21.477324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.162 [2024-11-06 09:17:21.477357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:45.162 [2024-11-06 09:17:21.477373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.162 [2024-11-06 09:17:21.479918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.162 [2024-11-06 09:17:21.479962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:45.162 [2024-11-06 09:17:21.480039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:24:45.162 [2024-11-06 09:17:21.480111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:45.162 [2024-11-06 09:17:21.480256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:45.162 spare 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.162 [2024-11-06 09:17:21.580373] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:24:45.162 [2024-11-06 09:17:21.580416] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:24:45.162 [2024-11-06 09:17:21.580557] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:45.162 [2024-11-06 09:17:21.580678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:24:45.162 [2024-11-06 09:17:21.580696] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:24:45.162 [2024-11-06 09:17:21.580832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.162 09:17:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.162 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.162 "name": "raid_bdev1", 00:24:45.162 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:45.162 "strip_size_kb": 0, 00:24:45.162 "state": "online", 00:24:45.162 "raid_level": "raid1", 00:24:45.162 "superblock": true, 00:24:45.162 "num_base_bdevs": 2, 00:24:45.162 "num_base_bdevs_discovered": 2, 00:24:45.162 "num_base_bdevs_operational": 2, 00:24:45.162 "base_bdevs_list": [ 00:24:45.162 { 00:24:45.162 "name": "spare", 00:24:45.162 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:45.162 "is_configured": true, 00:24:45.162 "data_offset": 256, 00:24:45.162 "data_size": 7936 00:24:45.162 }, 00:24:45.163 { 00:24:45.163 "name": "BaseBdev2", 00:24:45.163 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:45.163 "is_configured": true, 00:24:45.163 "data_offset": 256, 00:24:45.163 "data_size": 7936 00:24:45.163 } 00:24:45.163 ] 00:24:45.163 }' 00:24:45.163 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.163 09:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.730 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:45.730 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:45.730 09:17:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:24:45.730 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:24:45.730 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:45.730 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.730 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:24:45.731 "name": "raid_bdev1", 00:24:45.731 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:45.731 "strip_size_kb": 0, 00:24:45.731 "state": "online", 00:24:45.731 "raid_level": "raid1", 00:24:45.731 "superblock": true, 00:24:45.731 "num_base_bdevs": 2, 00:24:45.731 "num_base_bdevs_discovered": 2, 00:24:45.731 "num_base_bdevs_operational": 2, 00:24:45.731 "base_bdevs_list": [ 00:24:45.731 { 00:24:45.731 "name": "spare", 00:24:45.731 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:45.731 "is_configured": true, 00:24:45.731 "data_offset": 256, 00:24:45.731 "data_size": 7936 00:24:45.731 }, 00:24:45.731 { 00:24:45.731 "name": "BaseBdev2", 00:24:45.731 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:45.731 "is_configured": true, 00:24:45.731 "data_offset": 256, 00:24:45.731 "data_size": 7936 00:24:45.731 } 00:24:45.731 ] 00:24:45.731 }' 00:24:45.731 09:17:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.731 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.989 [2024-11-06 09:17:22.281592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:45.989 09:17:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.989 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:24:45.989 "name": "raid_bdev1", 00:24:45.990 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:45.990 "strip_size_kb": 0, 00:24:45.990 "state": "online", 00:24:45.990 
"raid_level": "raid1", 00:24:45.990 "superblock": true, 00:24:45.990 "num_base_bdevs": 2, 00:24:45.990 "num_base_bdevs_discovered": 1, 00:24:45.990 "num_base_bdevs_operational": 1, 00:24:45.990 "base_bdevs_list": [ 00:24:45.990 { 00:24:45.990 "name": null, 00:24:45.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.990 "is_configured": false, 00:24:45.990 "data_offset": 0, 00:24:45.990 "data_size": 7936 00:24:45.990 }, 00:24:45.990 { 00:24:45.990 "name": "BaseBdev2", 00:24:45.990 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:45.990 "is_configured": true, 00:24:45.990 "data_offset": 256, 00:24:45.990 "data_size": 7936 00:24:45.990 } 00:24:45.990 ] 00:24:45.990 }' 00:24:45.990 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:24:45.990 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:46.557 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:24:46.557 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.557 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:46.557 [2024-11-06 09:17:22.801785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:46.557 [2024-11-06 09:17:22.802047] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:24:46.557 [2024-11-06 09:17:22.802074] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:24:46.557 [2024-11-06 09:17:22.802122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:46.557 [2024-11-06 09:17:22.817562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:24:46.557 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.557 09:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:24:46.557 [2024-11-06 09:17:22.820109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:24:47.492 "name": "raid_bdev1", 00:24:47.492 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a", 00:24:47.492 "strip_size_kb": 0, 00:24:47.492 "state": "online", 00:24:47.492 "raid_level": "raid1", 00:24:47.492 "superblock": true, 00:24:47.492 "num_base_bdevs": 2, 00:24:47.492 "num_base_bdevs_discovered": 2, 00:24:47.492 "num_base_bdevs_operational": 2, 00:24:47.492 "process": { 00:24:47.492 "type": "rebuild", 00:24:47.492 "target": "spare", 00:24:47.492 "progress": { 00:24:47.492 "blocks": 2560, 00:24:47.492 "percent": 32 00:24:47.492 } 00:24:47.492 }, 00:24:47.492 "base_bdevs_list": [ 00:24:47.492 { 00:24:47.492 "name": "spare", 00:24:47.492 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c", 00:24:47.492 "is_configured": true, 00:24:47.492 "data_offset": 256, 00:24:47.492 "data_size": 7936 00:24:47.492 }, 00:24:47.492 { 00:24:47.492 "name": "BaseBdev2", 00:24:47.492 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198", 00:24:47.492 "is_configured": true, 00:24:47.492 "data_offset": 256, 00:24:47.492 "data_size": 7936 00:24:47.492 } 00:24:47.492 ] 00:24:47.492 }' 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.492 09:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:47.492 [2024-11-06 09:17:23.981496] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.751 [2024-11-06 09:17:24.028406] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:47.751 [2024-11-06 09:17:24.028485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:47.751 [2024-11-06 09:17:24.028508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:47.751 [2024-11-06 09:17:24.028523] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:24:47.751 09:17:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:47.751 "name": "raid_bdev1",
00:24:47.751 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:47.751 "strip_size_kb": 0,
00:24:47.751 "state": "online",
00:24:47.751 "raid_level": "raid1",
00:24:47.751 "superblock": true,
00:24:47.751 "num_base_bdevs": 2,
00:24:47.751 "num_base_bdevs_discovered": 1,
00:24:47.751 "num_base_bdevs_operational": 1,
00:24:47.751 "base_bdevs_list": [
00:24:47.751 {
00:24:47.751 "name": null,
00:24:47.751 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:47.751 "is_configured": false,
00:24:47.751 "data_offset": 0,
00:24:47.751 "data_size": 7936
00:24:47.751 },
00:24:47.751 {
00:24:47.751 "name": "BaseBdev2",
00:24:47.751 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:47.751 "is_configured": true,
00:24:47.751 "data_offset": 256,
00:24:47.751 "data_size": 7936
00:24:47.751 }
00:24:47.751 ]
00:24:47.751 }'
00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:47.751 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:48.318 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:24:48.319 09:17:24
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:48.319 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:48.319 [2024-11-06 09:17:24.565394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:48.319 [2024-11-06 09:17:24.565616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:48.319 [2024-11-06 09:17:24.565694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:24:48.319 [2024-11-06 09:17:24.565836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:48.319 [2024-11-06 09:17:24.566133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:48.319 [2024-11-06 09:17:24.566291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:48.319 [2024-11-06 09:17:24.566500] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare
00:24:48.319 [2024-11-06 09:17:24.566534] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5)
00:24:48.319 [2024-11-06 09:17:24.566548] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1.
00:24:48.319 [2024-11-06 09:17:24.566612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:48.319 [2024-11-06 09:17:24.582256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:24:48.319 spare
00:24:48.319 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:48.319 09:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1
00:24:48.319 [2024-11-06 09:17:24.584849] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- #
raid_bdev_info='{
00:24:49.257 "name": "raid_bdev1",
00:24:49.257 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:49.257 "strip_size_kb": 0,
00:24:49.257 "state": "online",
00:24:49.257 "raid_level": "raid1",
00:24:49.257 "superblock": true,
00:24:49.257 "num_base_bdevs": 2,
00:24:49.257 "num_base_bdevs_discovered": 2,
00:24:49.257 "num_base_bdevs_operational": 2,
00:24:49.257 "process": {
00:24:49.257 "type": "rebuild",
00:24:49.257 "target": "spare",
00:24:49.257 "progress": {
00:24:49.257 "blocks": 2560,
00:24:49.257 "percent": 32
00:24:49.257 }
00:24:49.257 },
00:24:49.257 "base_bdevs_list": [
00:24:49.257 {
00:24:49.257 "name": "spare",
00:24:49.257 "uuid": "1cbc09ff-8c94-51bc-abe0-d3811e320f7c",
00:24:49.257 "is_configured": true,
00:24:49.257 "data_offset": 256,
00:24:49.257 "data_size": 7936
00:24:49.257 },
00:24:49.257 {
00:24:49.257 "name": "BaseBdev2",
00:24:49.257 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:49.257 "is_configured": true,
00:24:49.257 "data_offset": 256,
00:24:49.257 "data_size": 7936
00:24:49.257 }
00:24:49.257 ]
00:24:49.257 }'
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.257 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:49.257 [2024-11-06
09:17:25.746637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:49.516 [2024-11-06 09:17:25.793892] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:24:49.516 [2024-11-06 09:17:25.794204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:49.516 [2024-11-06 09:17:25.794344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:24:49.516 [2024-11-06 09:17:25.794490] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:49.516 09:17:25
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:49.516 "name": "raid_bdev1",
00:24:49.516 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:49.516 "strip_size_kb": 0,
00:24:49.516 "state": "online",
00:24:49.516 "raid_level": "raid1",
00:24:49.516 "superblock": true,
00:24:49.516 "num_base_bdevs": 2,
00:24:49.516 "num_base_bdevs_discovered": 1,
00:24:49.516 "num_base_bdevs_operational": 1,
00:24:49.516 "base_bdevs_list": [
00:24:49.516 {
00:24:49.516 "name": null,
00:24:49.516 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:49.516 "is_configured": false,
00:24:49.516 "data_offset": 0,
00:24:49.516 "data_size": 7936
00:24:49.516 },
00:24:49.516 {
00:24:49.516 "name": "BaseBdev2",
00:24:49.516 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:49.516 "is_configured": true,
00:24:49.516 "data_offset": 256,
00:24:49.516 "data_size": 7936
00:24:49.516 }
00:24:49.516 ]
00:24:49.516 }'
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:49.516 09:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:50.084 09:17:26
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:50.084 "name": "raid_bdev1",
00:24:50.084 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:50.084 "strip_size_kb": 0,
00:24:50.084 "state": "online",
00:24:50.084 "raid_level": "raid1",
00:24:50.084 "superblock": true,
00:24:50.084 "num_base_bdevs": 2,
00:24:50.084 "num_base_bdevs_discovered": 1,
00:24:50.084 "num_base_bdevs_operational": 1,
00:24:50.084 "base_bdevs_list": [
00:24:50.084 {
00:24:50.084 "name": null,
00:24:50.084 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:50.084 "is_configured": false,
00:24:50.084 "data_offset": 0,
00:24:50.084 "data_size": 7936
00:24:50.084 },
00:24:50.084 {
00:24:50.084 "name": "BaseBdev2",
00:24:50.084 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:50.084 "is_configured": true,
00:24:50.084 "data_offset": 256,
00:24:50.084 "data_size": 7936
00:24:50.084 }
00:24:50.084 ]
00:24:50.084 }'
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:24:50.084 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:50.085 [2024-11-06 09:17:26.531054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:50.085 [2024-11-06 09:17:26.531326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:50.085 [2024-11-06 09:17:26.531371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:24:50.085 [2024-11-06 09:17:26.531388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:50.085 [2024-11-06 09:17:26.531593]
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:50.085 [2024-11-06 09:17:26.531615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:50.085 [2024-11-06 09:17:26.531687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1
00:24:50.085 [2024-11-06 09:17:26.531706] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:24:50.085 [2024-11-06 09:17:26.531720] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:24:50.085 [2024-11-06 09:17:26.531734] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument
00:24:50.085 BaseBdev1
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:50.085 09:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:51.021 09:17:27
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:51.021 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:51.282 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:51.282 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:51.282 "name": "raid_bdev1",
00:24:51.282 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:51.282 "strip_size_kb": 0,
00:24:51.282 "state": "online",
00:24:51.282 "raid_level": "raid1",
00:24:51.282 "superblock": true,
00:24:51.282 "num_base_bdevs": 2,
00:24:51.282 "num_base_bdevs_discovered": 1,
00:24:51.282 "num_base_bdevs_operational": 1,
00:24:51.282 "base_bdevs_list": [
00:24:51.282 {
00:24:51.283 "name": null,
00:24:51.283 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:51.283 "is_configured": false,
00:24:51.283 "data_offset": 0,
00:24:51.283 "data_size": 7936
00:24:51.283 },
00:24:51.283 {
00:24:51.283 "name": "BaseBdev2",
00:24:51.283 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:51.283 "is_configured": true,
00:24:51.283 "data_offset": 256,
00:24:51.283 "data_size": 7936
00:24:51.283 }
00:24:51.283 ]
00:24:51.283 }'
00:24:51.283 09:17:27
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:51.283 09:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:51.864 "name": "raid_bdev1",
00:24:51.864 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:51.864 "strip_size_kb": 0,
00:24:51.864 "state": "online",
00:24:51.864 "raid_level": "raid1",
00:24:51.864 "superblock": true,
00:24:51.864 "num_base_bdevs": 2,
00:24:51.864 "num_base_bdevs_discovered": 1,
00:24:51.864 "num_base_bdevs_operational": 1,
00:24:51.864 "base_bdevs_list": [
00:24:51.864 {
00:24:51.864 "name":
null,
00:24:51.864 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:51.864 "is_configured": false,
00:24:51.864 "data_offset": 0,
00:24:51.864 "data_size": 7936
00:24:51.864 },
00:24:51.864 {
00:24:51.864 "name": "BaseBdev2",
00:24:51.864 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:51.864 "is_configured": true,
00:24:51.864 "data_offset": 256,
00:24:51.864 "data_size": 7936
00:24:51.864 }
00:24:51.864 ]
00:24:51.864 }'
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved --
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:51.864 [2024-11-06 09:17:28.263768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:51.864 [2024-11-06 09:17:28.264101] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
00:24:51.864 [2024-11-06 09:17:28.264138] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
00:24:51.864 request:
00:24:51.864 {
00:24:51.864 "base_bdev": "BaseBdev1",
00:24:51.864 "raid_bdev": "raid_bdev1",
00:24:51.864 "method": "bdev_raid_add_base_bdev",
00:24:51.864 "req_id": 1
00:24:51.864 }
00:24:51.864 Got JSON-RPC error response
00:24:51.864 response:
00:24:51.864 {
00:24:51.864 "code": -22,
00:24:51.864 "message": "Failed to add base bdev to RAID bdev: Invalid argument"
00:24:51.864 }
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:51.864 09:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state
raid_bdev1 online raid1 0 1
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:24:52.800 "name": "raid_bdev1",
00:24:52.800 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:52.800 "strip_size_kb": 0,
00:24:52.800 "state": "online",
00:24:52.800 "raid_level": "raid1",
00:24:52.800 "superblock": true,
00:24:52.800 "num_base_bdevs": 2,
00:24:52.800 "num_base_bdevs_discovered": 1,
00:24:52.800 "num_base_bdevs_operational": 1,
00:24:52.800 "base_bdevs_list": [
00:24:52.800 {
00:24:52.800 "name": null,
00:24:52.800 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:52.800 "is_configured": false,
00:24:52.800 "data_offset": 0,
00:24:52.800 "data_size": 7936
00:24:52.800 },
00:24:52.800 {
00:24:52.800 "name": "BaseBdev2",
00:24:52.800 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:52.800 "is_configured": true,
00:24:52.800 "data_offset": 256,
00:24:52.800 "data_size": 7936
00:24:52.800 }
00:24:52.800 ]
00:24:52.800 }'
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:24:52.800 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:53.367
09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:53.367 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:24:53.367 "name": "raid_bdev1",
00:24:53.367 "uuid": "1d36c077-2471-42a0-a5a6-f5caf339796a",
00:24:53.367 "strip_size_kb": 0,
00:24:53.367 "state": "online",
00:24:53.367 "raid_level": "raid1",
00:24:53.367 "superblock": true,
00:24:53.367 "num_base_bdevs": 2,
00:24:53.367 "num_base_bdevs_discovered": 1,
00:24:53.367 "num_base_bdevs_operational": 1,
00:24:53.367 "base_bdevs_list": [
00:24:53.367 {
00:24:53.367 "name": null,
00:24:53.367 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:53.367 "is_configured": false,
00:24:53.367 "data_offset": 0,
00:24:53.367 "data_size": 7936
00:24:53.367 },
00:24:53.367 {
00:24:53.367 "name": "BaseBdev2",
00:24:53.367 "uuid": "57fc1551-d8ba-541b-b35b-638a08fd9198",
00:24:53.367 "is_configured": true,
00:24:53.367 "data_offset": 256,
00:24:53.367 "data_size": 7936
00:24:53.367 }
00:24:53.367 ]
00:24:53.367 }'
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89402
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 89402 ']'
00:24:53.626 09:17:29
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 89402
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:24:53.626 09:17:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89402 killing process with pid 89402 Received shutdown signal, test time was about 60.000000 seconds
00:24:53.626
00:24:53.626 Latency(us)
00:24:53.626 [2024-11-06T09:17:30.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:53.626 [2024-11-06T09:17:30.161Z] ===================================================================================================================
00:24:53.626 [2024-11-06T09:17:30.161Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:24:53.626 09:17:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:24:53.626 09:17:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:24:53.626 09:17:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89402'
00:24:53.626 09:17:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 89402 [2024-11-06 09:17:30.003098] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:53.626 09:17:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 89402
00:24:53.626 [2024-11-06 09:17:30.003245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:53.626 [2024-11-06 09:17:30.003316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to
free all in destruct 00:24:53.626 [2024-11-06 09:17:30.003334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:24:53.886 [2024-11-06 09:17:30.272537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:54.822 ************************************ 00:24:54.822 END TEST raid_rebuild_test_sb_md_interleaved 00:24:54.822 ************************************ 00:24:54.822 09:17:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:24:54.822 00:24:54.822 real 0m18.581s 00:24:54.822 user 0m25.389s 00:24:54.822 sys 0m1.397s 00:24:54.822 09:17:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:54.822 09:17:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:24:55.094 09:17:31 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:24:55.094 09:17:31 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:24:55.094 09:17:31 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89402 ']' 00:24:55.094 09:17:31 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89402 00:24:55.094 09:17:31 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:24:55.094 00:24:55.094 real 13m0.962s 00:24:55.094 user 18m25.957s 00:24:55.094 sys 1m45.611s 00:24:55.094 ************************************ 00:24:55.094 END TEST bdev_raid 00:24:55.094 ************************************ 00:24:55.094 09:17:31 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:55.094 09:17:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:55.094 09:17:31 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:55.094 09:17:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:55.094 09:17:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:55.094 09:17:31 -- common/autotest_common.sh@10 -- # set +x 00:24:55.094 
************************************ 00:24:55.094 START TEST spdkcli_raid 00:24:55.094 ************************************ 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:55.094 * Looking for test storage... 00:24:55.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:55.094 09:17:31 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:55.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.094 --rc genhtml_branch_coverage=1 00:24:55.094 --rc genhtml_function_coverage=1 00:24:55.094 --rc genhtml_legend=1 00:24:55.094 --rc geninfo_all_blocks=1 00:24:55.094 --rc geninfo_unexecuted_blocks=1 00:24:55.094 00:24:55.094 ' 00:24:55.094 09:17:31 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:55.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.094 --rc genhtml_branch_coverage=1 00:24:55.094 --rc genhtml_function_coverage=1 00:24:55.094 --rc genhtml_legend=1 00:24:55.095 --rc geninfo_all_blocks=1 00:24:55.095 --rc geninfo_unexecuted_blocks=1 00:24:55.095 00:24:55.095 ' 00:24:55.095 
09:17:31 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:55.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.095 --rc genhtml_branch_coverage=1 00:24:55.095 --rc genhtml_function_coverage=1 00:24:55.095 --rc genhtml_legend=1 00:24:55.095 --rc geninfo_all_blocks=1 00:24:55.095 --rc geninfo_unexecuted_blocks=1 00:24:55.095 00:24:55.095 ' 00:24:55.095 09:17:31 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:55.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.095 --rc genhtml_branch_coverage=1 00:24:55.095 --rc genhtml_function_coverage=1 00:24:55.095 --rc genhtml_legend=1 00:24:55.095 --rc geninfo_all_blocks=1 00:24:55.095 --rc geninfo_unexecuted_blocks=1 00:24:55.095 00:24:55.095 ' 00:24:55.095 09:17:31 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:55.095 09:17:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:55.095 09:17:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:55.095 09:17:31 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:24:55.095 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:24:55.380 09:17:31 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90090 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90090 00:24:55.380 09:17:31 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 90090 ']' 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:55.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:55.380 09:17:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:55.380 [2024-11-06 09:17:31.766890] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:24:55.380 [2024-11-06 09:17:31.767973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90090 ] 00:24:55.639 [2024-11-06 09:17:31.954042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:55.639 [2024-11-06 09:17:32.089535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.639 [2024-11-06 09:17:32.089556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.574 09:17:32 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:56.574 09:17:32 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:24:56.574 09:17:32 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:24:56.574 09:17:32 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:56.574 09:17:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:56.574 09:17:32 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:24:56.574 09:17:32 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.574 09:17:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:56.574 09:17:32 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:56.574 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:56.574 ' 00:24:58.476 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:24:58.476 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:24:58.476 09:17:34 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:24:58.476 09:17:34 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.476 09:17:34 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.476 09:17:34 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:24:58.476 09:17:34 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:58.476 09:17:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:58.476 09:17:34 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:24:58.476 ' 00:24:59.409 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:24:59.668 09:17:35 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:24:59.668 09:17:35 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:59.668 09:17:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:59.668 09:17:36 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:24:59.668 09:17:36 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:59.668 09:17:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:24:59.668 09:17:36 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:24:59.668 09:17:36 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:25:00.234 09:17:36 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:25:00.234 09:17:36 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:25:00.234 09:17:36 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:25:00.234 09:17:36 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:00.234 09:17:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.234 09:17:36 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:25:00.234 09:17:36 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.234 09:17:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:00.234 09:17:36 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:25:00.234 ' 00:25:01.171 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:25:01.429 09:17:37 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:25:01.429 09:17:37 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:01.429 09:17:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 09:17:37 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:25:01.429 09:17:37 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:01.429 09:17:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:01.429 09:17:37 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:25:01.429 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:25:01.429 ' 00:25:02.852 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:25:02.852 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:25:02.852 09:17:39 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:25:02.852 09:17:39 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:02.852 09:17:39 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:03.110 09:17:39 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90090 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90090 ']' 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90090 00:25:03.110 09:17:39 spdkcli_raid -- 
common/autotest_common.sh@957 -- # uname 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90090 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90090' 00:25:03.110 killing process with pid 90090 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 90090 00:25:03.110 09:17:39 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 90090 00:25:05.640 09:17:41 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:25:05.640 09:17:41 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90090 ']' 00:25:05.640 Process with pid 90090 is not found 00:25:05.640 09:17:41 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90090 00:25:05.640 09:17:41 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 90090 ']' 00:25:05.640 09:17:41 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 90090 00:25:05.640 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (90090) - No such process 00:25:05.640 09:17:41 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 90090 is not found' 00:25:05.640 09:17:41 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:25:05.640 09:17:41 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:05.640 09:17:41 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:05.640 09:17:41 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:05.640 00:25:05.640 real 0m10.237s 00:25:05.640 user 0m21.225s 00:25:05.640 sys 
0m1.139s 00:25:05.640 09:17:41 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:05.640 09:17:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:25:05.640 ************************************ 00:25:05.640 END TEST spdkcli_raid 00:25:05.640 ************************************ 00:25:05.640 09:17:41 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:05.640 09:17:41 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:05.640 09:17:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:05.640 09:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:05.640 ************************************ 00:25:05.640 START TEST blockdev_raid5f 00:25:05.640 ************************************ 00:25:05.640 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:25:05.640 * Looking for test storage... 00:25:05.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:05.640 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:05.640 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:25:05.640 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:05.640 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:05.640 09:17:41 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:25:05.640 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:05.641 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.641 --rc genhtml_branch_coverage=1 00:25:05.641 --rc genhtml_function_coverage=1 00:25:05.641 --rc genhtml_legend=1 00:25:05.641 --rc geninfo_all_blocks=1 00:25:05.641 --rc geninfo_unexecuted_blocks=1 00:25:05.641 00:25:05.641 ' 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:05.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.641 --rc genhtml_branch_coverage=1 00:25:05.641 --rc genhtml_function_coverage=1 00:25:05.641 --rc genhtml_legend=1 00:25:05.641 --rc geninfo_all_blocks=1 00:25:05.641 --rc geninfo_unexecuted_blocks=1 00:25:05.641 00:25:05.641 ' 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:05.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.641 --rc genhtml_branch_coverage=1 00:25:05.641 --rc genhtml_function_coverage=1 00:25:05.641 --rc genhtml_legend=1 00:25:05.641 --rc geninfo_all_blocks=1 00:25:05.641 --rc geninfo_unexecuted_blocks=1 00:25:05.641 00:25:05.641 ' 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:05.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:05.641 --rc genhtml_branch_coverage=1 00:25:05.641 --rc genhtml_function_coverage=1 00:25:05.641 --rc genhtml_legend=1 00:25:05.641 --rc geninfo_all_blocks=1 00:25:05.641 --rc geninfo_unexecuted_blocks=1 00:25:05.641 00:25:05.641 ' 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90366 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
90366 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 90366 ']' 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.641 09:17:41 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:05.641 09:17:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:05.641 [2024-11-06 09:17:42.051122] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:25:05.641 [2024-11-06 09:17:42.051510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90366 ] 00:25:05.899 [2024-11-06 09:17:42.239415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.899 [2024-11-06 09:17:42.384580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.846 09:17:43 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:06.846 09:17:43 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:25:06.846 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:25:06.846 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:25:06.846 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:25:06.846 09:17:43 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.846 09:17:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:06.846 Malloc0 00:25:06.846 Malloc1 00:25:07.119 Malloc2 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:25:07.119 09:17:43 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:25:07.119 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:25:07.120 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3d6977d8-ad9a-4988-9180-8d04510d342d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3d6977d8-ad9a-4988-9180-8d04510d342d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3d6977d8-ad9a-4988-9180-8d04510d342d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4cf577e6-f508-4205-935d-70724335ccfc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"ecdaad68-8445-40f6-a28d-16905c915ec1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "45d4290c-b00a-44a3-95bd-7ff90a178ee5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:25:07.120 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:25:07.120 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:25:07.120 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:25:07.120 09:17:43 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90366 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 90366 ']' 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 90366 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90366 00:25:07.120 killing process with pid 90366 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90366' 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 90366 00:25:07.120 09:17:43 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 90366 00:25:09.652 09:17:46 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:09.652 09:17:46 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:09.652 09:17:46 
blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:25:09.652 09:17:46 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:09.652 09:17:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:09.652 ************************************ 00:25:09.652 START TEST bdev_hello_world 00:25:09.652 ************************************ 00:25:09.652 09:17:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:25:09.913 [2024-11-06 09:17:46.242051] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:25:09.913 [2024-11-06 09:17:46.242204] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90433 ] 00:25:09.913 [2024-11-06 09:17:46.415846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.170 [2024-11-06 09:17:46.546023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.737 [2024-11-06 09:17:47.073025] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:25:10.737 [2024-11-06 09:17:47.073098] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:25:10.737 [2024-11-06 09:17:47.073138] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:25:10.737 [2024-11-06 09:17:47.073703] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:25:10.737 [2024-11-06 09:17:47.073907] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:25:10.737 [2024-11-06 09:17:47.073935] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:25:10.737 [2024-11-06 09:17:47.074004] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:25:10.737 00:25:10.737 [2024-11-06 09:17:47.074034] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:25:12.112 ************************************ 00:25:12.112 END TEST bdev_hello_world 00:25:12.112 ************************************ 00:25:12.112 00:25:12.112 real 0m2.199s 00:25:12.112 user 0m1.788s 00:25:12.112 sys 0m0.284s 00:25:12.112 09:17:48 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:12.112 09:17:48 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:25:12.112 09:17:48 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:25:12.112 09:17:48 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:12.112 09:17:48 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:12.112 09:17:48 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:12.112 ************************************ 00:25:12.112 START TEST bdev_bounds 00:25:12.112 ************************************ 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90475 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:25:12.112 Process bdevio pid: 90475 00:25:12.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90475' 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90475 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 90475 ']' 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:12.112 09:17:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:12.112 [2024-11-06 09:17:48.511418] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:25:12.112 [2024-11-06 09:17:48.511809] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90475 ] 00:25:12.371 [2024-11-06 09:17:48.695755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:12.371 [2024-11-06 09:17:48.830992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.371 [2024-11-06 09:17:48.831130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.371 [2024-11-06 09:17:48.831142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:12.938 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:12.938 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:25:12.938 09:17:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:25:13.196 I/O targets: 00:25:13.196 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:25:13.196 00:25:13.196 00:25:13.196 CUnit - A unit testing framework for C - Version 2.1-3 00:25:13.196 http://cunit.sourceforge.net/ 00:25:13.196 00:25:13.196 00:25:13.196 Suite: bdevio tests on: raid5f 00:25:13.196 Test: blockdev write read block ...passed 00:25:13.196 Test: blockdev write zeroes read block ...passed 00:25:13.196 Test: blockdev write zeroes read no split ...passed 00:25:13.196 Test: blockdev write zeroes read split ...passed 00:25:13.456 Test: blockdev write zeroes read split partial ...passed 00:25:13.456 Test: blockdev reset ...passed 00:25:13.456 Test: blockdev write read 8 blocks ...passed 00:25:13.456 Test: blockdev write read size > 128k ...passed 00:25:13.456 Test: blockdev write read invalid size ...passed 00:25:13.456 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:25:13.456 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:13.456 Test: blockdev write read max offset ...passed 00:25:13.456 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:13.456 Test: blockdev writev readv 8 blocks ...passed 00:25:13.456 Test: blockdev writev readv 30 x 1block ...passed 00:25:13.456 Test: blockdev writev readv block ...passed 00:25:13.456 Test: blockdev writev readv size > 128k ...passed 00:25:13.456 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:13.456 Test: blockdev comparev and writev ...passed 00:25:13.456 Test: blockdev nvme passthru rw ...passed 00:25:13.456 Test: blockdev nvme passthru vendor specific ...passed 00:25:13.456 Test: blockdev nvme admin passthru ...passed 00:25:13.456 Test: blockdev copy ...passed 00:25:13.456 00:25:13.456 Run Summary: Type Total Ran Passed Failed Inactive 00:25:13.456 suites 1 1 n/a 0 0 00:25:13.456 tests 23 23 23 0 0 00:25:13.456 asserts 130 130 130 0 n/a 00:25:13.456 00:25:13.456 Elapsed time = 0.579 seconds 00:25:13.456 0 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90475 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 90475 ']' 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 90475 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90475 00:25:13.456 killing process with pid 90475 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:13.456 09:17:49 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90475' 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 90475 00:25:13.456 09:17:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 90475 00:25:14.831 ************************************ 00:25:14.831 END TEST bdev_bounds 00:25:14.831 ************************************ 00:25:14.831 09:17:51 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:25:14.831 00:25:14.831 real 0m2.778s 00:25:14.831 user 0m6.902s 00:25:14.831 sys 0m0.408s 00:25:14.831 09:17:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:14.831 09:17:51 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:25:14.831 09:17:51 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:14.831 09:17:51 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:25:14.831 09:17:51 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:14.831 09:17:51 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:14.831 ************************************ 00:25:14.831 START TEST bdev_nbd 00:25:14.831 ************************************ 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 
00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90536 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90536 /var/tmp/spdk-nbd.sock 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 90536 ']' 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:14.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:14.831 09:17:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:14.831 [2024-11-06 09:17:51.346072] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:25:14.831 [2024-11-06 09:17:51.346227] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.090 [2024-11-06 09:17:51.519131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.348 [2024-11-06 09:17:51.650046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx 
/var/tmp/spdk-nbd.sock raid5f 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:15.917 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:16.175 1+0 records in 00:25:16.175 1+0 records out 00:25:16.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271277 s, 15.1 MB/s 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:25:16.175 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:16.434 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:25:16.434 { 00:25:16.434 "nbd_device": "/dev/nbd0", 00:25:16.434 "bdev_name": "raid5f" 00:25:16.434 } 00:25:16.434 ]' 00:25:16.434 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:25:16.434 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:25:16.434 { 00:25:16.435 "nbd_device": "/dev/nbd0", 00:25:16.435 "bdev_name": "raid5f" 00:25:16.435 } 00:25:16.435 ]' 00:25:16.435 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:25:16.435 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:16.435 
09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:16.435 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:16.435 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:16.435 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:16.435 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:16.435 09:17:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:17.001 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:17.001 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:17.001 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.002 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:17.260 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:25:17.518 /dev/nbd0 00:25:17.518 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:17.518 09:17:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:17.518 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:25:17.518 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:25:17.518 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:17.519 1+0 records in 00:25:17.519 1+0 records out 00:25:17.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329123 s, 12.4 MB/s 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:25:17.519 09:17:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:17.519 09:17:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:25:17.519 09:17:54 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:25:17.519 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:17.519 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:17.519 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:17.519 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:17.519 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:18.105 { 00:25:18.105 "nbd_device": "/dev/nbd0", 00:25:18.105 "bdev_name": "raid5f" 00:25:18.105 } 00:25:18.105 ]' 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:18.105 { 00:25:18.105 "nbd_device": "/dev/nbd0", 00:25:18.105 "bdev_name": "raid5f" 00:25:18.105 } 00:25:18.105 ]' 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@66 -- # echo 1 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:25:18.105 256+0 records in 00:25:18.105 256+0 records out 00:25:18.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00883514 s, 119 MB/s 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:25:18.105 256+0 records in 00:25:18.105 256+0 records out 00:25:18.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0400953 s, 26.2 MB/s 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 
00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:18.105 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:18.365 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:25:18.624 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:18.625 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:18.625 09:17:54 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:25:18.625 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:25:18.883 malloc_lvol_verify 00:25:18.883 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:25:19.142 3cfe36b2-7cef-4323-90fe-5817936a93b2 00:25:19.142 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:25:19.401 c4ea3e44-476a-4802-b501-80b916e05a62 00:25:19.401 09:17:55 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:25:19.661 /dev/nbd0 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:25:19.661 mke2fs 1.47.0 (5-Feb-2023) 00:25:19.661 Discarding device blocks: 0/4096 done 00:25:19.661 Creating filesystem with 4096 1k blocks and 1024 inodes 00:25:19.661 00:25:19.661 Allocating group tables: 0/1 done 00:25:19.661 Writing inode tables: 0/1 done 00:25:19.661 Creating journal (1024 blocks): done 00:25:19.661 Writing superblocks and filesystem accounting information: 0/1 done 00:25:19.661 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:19.661 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90536 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 90536 ']' 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 90536 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:20.229 09:17:56 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90536 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:20.229 killing process with pid 90536 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90536' 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 90536 00:25:20.229 09:17:56 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@976 -- # wait 90536 00:25:21.605 09:17:57 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:25:21.605 00:25:21.605 real 0m6.686s 00:25:21.605 user 0m9.675s 00:25:21.605 sys 0m1.414s 00:25:21.605 09:17:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:21.605 09:17:57 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:25:21.605 ************************************ 00:25:21.605 END TEST bdev_nbd 00:25:21.605 ************************************ 00:25:21.605 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:25:21.605 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:25:21.605 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:25:21.605 09:17:57 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:25:21.605 09:17:57 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:21.605 09:17:57 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:21.605 09:17:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:21.605 ************************************ 00:25:21.605 START TEST bdev_fio 00:25:21.605 ************************************ 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1127 -- # fio_test_suite '' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:25:21.605 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:21.605 
09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:25:21.605 09:17:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:21.605 
************************************ 00:25:21.605 START TEST bdev_fio_rw_verify 00:25:21.605 ************************************ 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:21.605 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:21.606 09:17:58 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:25:21.864 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:25:21.864 fio-3.35 00:25:21.864 Starting 1 thread 00:25:34.069 00:25:34.069 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90741: Wed Nov 6 09:18:09 2024 00:25:34.069 read: IOPS=8603, BW=33.6MiB/s (35.2MB/s)(336MiB/10001msec) 00:25:34.069 slat (usec): min=25, max=189, avg=29.01, stdev= 3.26 00:25:34.069 clat (usec): min=14, max=461, avg=185.91, stdev=68.70 00:25:34.069 lat (usec): min=42, max=499, avg=214.92, stdev=69.12 00:25:34.069 clat percentiles (usec): 00:25:34.069 | 50.000th=[ 192], 99.000th=[ 310], 99.900th=[ 367], 99.990th=[ 400], 00:25:34.069 | 99.999th=[ 461] 00:25:34.069 write: IOPS=9036, BW=35.3MiB/s 
(37.0MB/s)(349MiB/9873msec); 0 zone resets 00:25:34.069 slat (usec): min=12, max=415, avg=23.05, stdev= 4.72 00:25:34.069 clat (usec): min=79, max=1337, avg=423.96, stdev=52.23 00:25:34.069 lat (usec): min=100, max=1504, avg=447.01, stdev=53.40 00:25:34.069 clat percentiles (usec): 00:25:34.069 | 50.000th=[ 433], 99.000th=[ 537], 99.900th=[ 635], 99.990th=[ 1090], 00:25:34.069 | 99.999th=[ 1336] 00:25:34.069 bw ( KiB/s): min=33760, max=38152, per=98.56%, avg=35627.37, stdev=1641.98, samples=19 00:25:34.069 iops : min= 8440, max= 9538, avg=8906.84, stdev=410.49, samples=19 00:25:34.069 lat (usec) : 20=0.01%, 100=6.02%, 250=31.39%, 500=61.02%, 750=1.55% 00:25:34.069 lat (usec) : 1000=0.01% 00:25:34.069 lat (msec) : 2=0.01% 00:25:34.069 cpu : usr=98.72%, sys=0.45%, ctx=73, majf=0, minf=7464 00:25:34.069 IO depths : 1=7.7%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:34.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.069 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:34.069 issued rwts: total=86045,89219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:34.069 latency : target=0, window=0, percentile=100.00%, depth=8 00:25:34.069 00:25:34.069 Run status group 0 (all jobs): 00:25:34.069 READ: bw=33.6MiB/s (35.2MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.2MB/s), io=336MiB (352MB), run=10001-10001msec 00:25:34.069 WRITE: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=349MiB (365MB), run=9873-9873msec 00:25:34.327 ----------------------------------------------------- 00:25:34.327 Suppressions used: 00:25:34.327 count bytes template 00:25:34.327 1 7 /usr/src/fio/parse.c 00:25:34.327 503 48288 /usr/src/fio/iolog.c 00:25:34.327 1 8 libtcmalloc_minimal.so 00:25:34.327 1 904 libcrypto.so 00:25:34.327 ----------------------------------------------------- 00:25:34.327 00:25:34.586 00:25:34.586 real 0m12.793s 00:25:34.586 user 0m13.097s 00:25:34.586 sys 0m0.840s 00:25:34.586 09:18:10 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:34.586 ************************************ 00:25:34.586 END TEST bdev_fio_rw_verify 00:25:34.586 ************************************ 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:25:34.586 09:18:10 
blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3d6977d8-ad9a-4988-9180-8d04510d342d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3d6977d8-ad9a-4988-9180-8d04510d342d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3d6977d8-ad9a-4988-9180-8d04510d342d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4cf577e6-f508-4205-935d-70724335ccfc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ecdaad68-8445-40f6-a28d-16905c915ec1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "45d4290c-b00a-44a3-95bd-7ff90a178ee5",' ' "is_configured": 
true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:25:34.586 /home/vagrant/spdk_repo/spdk 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:25:34.586 00:25:34.586 real 0m13.004s 00:25:34.586 user 0m13.206s 00:25:34.586 sys 0m0.922s 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:34.586 09:18:10 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:25:34.586 ************************************ 00:25:34.586 END TEST bdev_fio 00:25:34.586 ************************************ 00:25:34.586 09:18:11 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:34.586 09:18:11 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:25:34.586 09:18:11 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:25:34.586 09:18:11 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:34.587 09:18:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:34.587 ************************************ 00:25:34.587 START TEST bdev_verify 00:25:34.587 ************************************ 00:25:34.587 09:18:11 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 
0x3 '' 00:25:34.845 [2024-11-06 09:18:11.139868] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:25:34.845 [2024-11-06 09:18:11.140056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90906 ] 00:25:34.845 [2024-11-06 09:18:11.326222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:35.116 [2024-11-06 09:18:11.459239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.116 [2024-11-06 09:18:11.459243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.695 Running I/O for 5 seconds... 00:25:37.559 11296.00 IOPS, 44.12 MiB/s [2024-11-06T09:18:15.027Z] 12650.00 IOPS, 49.41 MiB/s [2024-11-06T09:18:16.031Z] 13179.33 IOPS, 51.48 MiB/s [2024-11-06T09:18:17.405Z] 13105.00 IOPS, 51.19 MiB/s [2024-11-06T09:18:17.405Z] 12906.80 IOPS, 50.42 MiB/s 00:25:40.870 Latency(us) 00:25:40.871 [2024-11-06T09:18:17.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.871 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:40.871 Verification LBA range: start 0x0 length 0x2000 00:25:40.871 raid5f : 5.02 6443.37 25.17 0.00 0.00 29870.19 273.69 26691.03 00:25:40.871 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:40.871 Verification LBA range: start 0x2000 length 0x2000 00:25:40.871 raid5f : 5.01 6449.48 25.19 0.00 0.00 29810.15 253.21 26810.18 00:25:40.871 [2024-11-06T09:18:17.406Z] =================================================================================================================== 00:25:40.871 [2024-11-06T09:18:17.406Z] Total : 12892.85 50.36 0.00 0.00 29840.17 253.21 26810.18 00:25:41.804 00:25:41.804 real 0m7.291s 00:25:41.804 user 0m13.371s 00:25:41.804 sys 0m0.299s 00:25:41.804 
09:18:18 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:41.804 09:18:18 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:25:41.804 ************************************ 00:25:41.804 END TEST bdev_verify 00:25:41.804 ************************************ 00:25:42.063 09:18:18 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:42.063 09:18:18 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:25:42.063 09:18:18 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:42.063 09:18:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:42.063 ************************************ 00:25:42.063 START TEST bdev_verify_big_io 00:25:42.063 ************************************ 00:25:42.063 09:18:18 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:25:42.063 [2024-11-06 09:18:18.482399] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 
00:25:42.063 [2024-11-06 09:18:18.482580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91001 ] 00:25:42.321 [2024-11-06 09:18:18.671998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:42.321 [2024-11-06 09:18:18.829371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.321 [2024-11-06 09:18:18.829377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.888 Running I/O for 5 seconds... 00:25:45.196 506.00 IOPS, 31.62 MiB/s [2024-11-06T09:18:22.666Z] 634.00 IOPS, 39.62 MiB/s [2024-11-06T09:18:23.621Z] 676.67 IOPS, 42.29 MiB/s [2024-11-06T09:18:24.556Z] 698.00 IOPS, 43.62 MiB/s [2024-11-06T09:18:24.815Z] 710.40 IOPS, 44.40 MiB/s 00:25:48.280 Latency(us) 00:25:48.280 [2024-11-06T09:18:24.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.280 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:25:48.280 Verification LBA range: start 0x0 length 0x200 00:25:48.280 raid5f : 5.28 360.96 22.56 0.00 0.00 8749864.54 178.73 366048.35 00:25:48.280 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:25:48.280 Verification LBA range: start 0x200 length 0x200 00:25:48.280 raid5f : 5.26 361.87 22.62 0.00 0.00 8709693.15 279.27 364141.85 00:25:48.280 [2024-11-06T09:18:24.815Z] =================================================================================================================== 00:25:48.280 [2024-11-06T09:18:24.815Z] Total : 722.83 45.18 0.00 0.00 8729778.84 178.73 366048.35 00:25:49.656 00:25:49.656 real 0m7.610s 00:25:49.656 user 0m13.971s 00:25:49.656 sys 0m0.313s 00:25:49.656 09:18:25 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:49.656 09:18:25 
blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:25:49.656 ************************************ 00:25:49.656 END TEST bdev_verify_big_io 00:25:49.656 ************************************ 00:25:49.656 09:18:26 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:49.656 09:18:26 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:25:49.656 09:18:26 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:49.656 09:18:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:49.656 ************************************ 00:25:49.656 START TEST bdev_write_zeroes 00:25:49.656 ************************************ 00:25:49.656 09:18:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:49.656 [2024-11-06 09:18:26.144447] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:25:49.656 [2024-11-06 09:18:26.144630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91101 ] 00:25:49.914 [2024-11-06 09:18:26.322387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.172 [2024-11-06 09:18:26.451401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.738 Running I/O for 1 seconds... 
00:25:51.672 19719.00 IOPS, 77.03 MiB/s 00:25:51.672 Latency(us) 00:25:51.672 [2024-11-06T09:18:28.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.673 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:25:51.673 raid5f : 1.01 19701.61 76.96 0.00 0.00 6471.30 1966.08 8579.26 00:25:51.673 [2024-11-06T09:18:28.208Z] =================================================================================================================== 00:25:51.673 [2024-11-06T09:18:28.208Z] Total : 19701.61 76.96 0.00 0.00 6471.30 1966.08 8579.26 00:25:53.047 00:25:53.047 real 0m3.268s 00:25:53.047 user 0m2.839s 00:25:53.047 sys 0m0.296s 00:25:53.047 09:18:29 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:53.047 ************************************ 00:25:53.047 09:18:29 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:53.047 END TEST bdev_write_zeroes 00:25:53.047 ************************************ 00:25:53.047 09:18:29 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:53.047 09:18:29 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:25:53.047 09:18:29 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:53.047 09:18:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:53.047 ************************************ 00:25:53.047 START TEST bdev_json_nonenclosed 00:25:53.047 ************************************ 00:25:53.047 09:18:29 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:53.047 [2024-11-06 
09:18:29.454934] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:25:53.047 [2024-11-06 09:18:29.455119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91154 ] 00:25:53.305 [2024-11-06 09:18:29.640783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.305 [2024-11-06 09:18:29.814384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.306 [2024-11-06 09:18:29.814499] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:53.306 [2024-11-06 09:18:29.814541] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:53.306 [2024-11-06 09:18:29.814573] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:53.564 00:25:53.564 real 0m0.722s 00:25:53.564 user 0m0.478s 00:25:53.564 sys 0m0.139s 00:25:53.564 09:18:30 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:53.564 09:18:30 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:53.564 ************************************ 00:25:53.564 END TEST bdev_json_nonenclosed 00:25:53.564 ************************************ 00:25:53.823 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:53.823 09:18:30 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:25:53.823 09:18:30 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:53.823 09:18:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:53.823 
************************************ 00:25:53.823 START TEST bdev_json_nonarray 00:25:53.823 ************************************ 00:25:53.823 09:18:30 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:53.823 [2024-11-06 09:18:30.210062] Starting SPDK v25.01-pre git sha1 a59d7e018 / DPDK 24.03.0 initialization... 00:25:53.823 [2024-11-06 09:18:30.210235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91180 ] 00:25:54.081 [2024-11-06 09:18:30.383726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.081 [2024-11-06 09:18:30.515632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.081 [2024-11-06 09:18:30.515760] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:25:54.081 [2024-11-06 09:18:30.515799] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:54.081 [2024-11-06 09:18:30.515851] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:54.356 00:25:54.356 real 0m0.657s 00:25:54.356 user 0m0.421s 00:25:54.356 sys 0m0.131s 00:25:54.356 09:18:30 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:54.356 09:18:30 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:54.356 ************************************ 00:25:54.356 END TEST bdev_json_nonarray 00:25:54.356 ************************************ 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:25:54.356 09:18:30 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:25:54.356 00:25:54.356 real 0m49.095s 00:25:54.356 user 1m7.194s 00:25:54.356 sys 0m5.191s 00:25:54.356 09:18:30 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:54.356 09:18:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:25:54.356 
************************************ 00:25:54.356 END TEST blockdev_raid5f 00:25:54.356 ************************************ 00:25:54.356 09:18:30 -- spdk/autotest.sh@194 -- # uname -s 00:25:54.356 09:18:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:25:54.356 09:18:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:54.356 09:18:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:25:54.356 09:18:30 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:25:54.356 09:18:30 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:25:54.356 09:18:30 -- spdk/autotest.sh@256 -- # timing_exit lib 00:25:54.356 09:18:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:54.356 09:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:54.623 09:18:30 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:54.623 09:18:30 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:54.623 09:18:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:54.623 09:18:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:54.623 09:18:30 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:54.623 09:18:30 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:25:54.623 09:18:30 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:54.623 09:18:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.623 09:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:54.623 09:18:30 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:54.623 09:18:30 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:25:54.623 09:18:30 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:25:54.623 09:18:30 -- common/autotest_common.sh@10 -- # set +x 00:25:56.000 INFO: APP EXITING 00:25:56.000 INFO: killing all VMs 00:25:56.000 INFO: killing vhost app 00:25:56.000 INFO: EXIT DONE 00:25:56.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:56.517 Waiting for block devices as requested 00:25:56.517 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:56.517 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:57.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:57.342 Cleaning 00:25:57.342 Removing: /var/run/dpdk/spdk0/config 00:25:57.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:57.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:57.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:57.342 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:57.342 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:57.342 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:57.342 Removing: /dev/shm/spdk_tgt_trace.pid56768 00:25:57.342 Removing: /var/run/dpdk/spdk0 00:25:57.342 Removing: /var/run/dpdk/spdk_pid56533 00:25:57.342 Removing: /var/run/dpdk/spdk_pid56768 00:25:57.342 Removing: /var/run/dpdk/spdk_pid56997 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57101 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57152 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57280 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57309 
00:25:57.342 Removing: /var/run/dpdk/spdk_pid57508 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57625 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57732 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57854 00:25:57.342 Removing: /var/run/dpdk/spdk_pid57962 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58007 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58044 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58114 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58226 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58695 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58770 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58844 00:25:57.342 Removing: /var/run/dpdk/spdk_pid58862 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59014 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59031 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59179 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59201 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59276 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59294 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59363 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59387 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59582 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59619 00:25:57.342 Removing: /var/run/dpdk/spdk_pid59702 00:25:57.342 Removing: /var/run/dpdk/spdk_pid61078 00:25:57.342 Removing: /var/run/dpdk/spdk_pid61290 00:25:57.342 Removing: /var/run/dpdk/spdk_pid61441 00:25:57.342 Removing: /var/run/dpdk/spdk_pid62090 00:25:57.342 Removing: /var/run/dpdk/spdk_pid62307 00:25:57.342 Removing: /var/run/dpdk/spdk_pid62447 00:25:57.342 Removing: /var/run/dpdk/spdk_pid63107 00:25:57.342 Removing: /var/run/dpdk/spdk_pid63437 00:25:57.342 Removing: /var/run/dpdk/spdk_pid63587 00:25:57.342 Removing: /var/run/dpdk/spdk_pid65001 00:25:57.342 Removing: /var/run/dpdk/spdk_pid65265 00:25:57.342 Removing: /var/run/dpdk/spdk_pid65405 00:25:57.342 Removing: /var/run/dpdk/spdk_pid66832 00:25:57.342 Removing: /var/run/dpdk/spdk_pid67085 00:25:57.342 Removing: /var/run/dpdk/spdk_pid67235 
00:25:57.342 Removing: /var/run/dpdk/spdk_pid68648 00:25:57.342 Removing: /var/run/dpdk/spdk_pid69101 00:25:57.342 Removing: /var/run/dpdk/spdk_pid69246 00:25:57.342 Removing: /var/run/dpdk/spdk_pid70760 00:25:57.342 Removing: /var/run/dpdk/spdk_pid71030 00:25:57.342 Removing: /var/run/dpdk/spdk_pid71179 00:25:57.342 Removing: /var/run/dpdk/spdk_pid72688 00:25:57.342 Removing: /var/run/dpdk/spdk_pid72958 00:25:57.342 Removing: /var/run/dpdk/spdk_pid73104 00:25:57.342 Removing: /var/run/dpdk/spdk_pid74617 00:25:57.342 Removing: /var/run/dpdk/spdk_pid75121 00:25:57.342 Removing: /var/run/dpdk/spdk_pid75267 00:25:57.342 Removing: /var/run/dpdk/spdk_pid75405 00:25:57.342 Removing: /var/run/dpdk/spdk_pid75863 00:25:57.342 Removing: /var/run/dpdk/spdk_pid76629 00:25:57.342 Removing: /var/run/dpdk/spdk_pid77013 00:25:57.342 Removing: /var/run/dpdk/spdk_pid77708 00:25:57.342 Removing: /var/run/dpdk/spdk_pid78186 00:25:57.602 Removing: /var/run/dpdk/spdk_pid78980 00:25:57.602 Removing: /var/run/dpdk/spdk_pid79401 00:25:57.602 Removing: /var/run/dpdk/spdk_pid81400 00:25:57.602 Removing: /var/run/dpdk/spdk_pid81851 00:25:57.602 Removing: /var/run/dpdk/spdk_pid82306 00:25:57.602 Removing: /var/run/dpdk/spdk_pid84424 00:25:57.602 Removing: /var/run/dpdk/spdk_pid84919 00:25:57.602 Removing: /var/run/dpdk/spdk_pid85423 00:25:57.602 Removing: /var/run/dpdk/spdk_pid86503 00:25:57.602 Removing: /var/run/dpdk/spdk_pid86830 00:25:57.602 Removing: /var/run/dpdk/spdk_pid87788 00:25:57.602 Removing: /var/run/dpdk/spdk_pid88117 00:25:57.602 Removing: /var/run/dpdk/spdk_pid89078 00:25:57.602 Removing: /var/run/dpdk/spdk_pid89402 00:25:57.602 Removing: /var/run/dpdk/spdk_pid90090 00:25:57.602 Removing: /var/run/dpdk/spdk_pid90366 00:25:57.602 Removing: /var/run/dpdk/spdk_pid90433 00:25:57.602 Removing: /var/run/dpdk/spdk_pid90475 00:25:57.602 Removing: /var/run/dpdk/spdk_pid90726 00:25:57.602 Removing: /var/run/dpdk/spdk_pid90906 00:25:57.602 Removing: /var/run/dpdk/spdk_pid91001 
00:25:57.602 Removing: /var/run/dpdk/spdk_pid91101 00:25:57.602 Removing: /var/run/dpdk/spdk_pid91154 00:25:57.602 Removing: /var/run/dpdk/spdk_pid91180 00:25:57.602 Clean 00:25:57.602 09:18:33 -- common/autotest_common.sh@1451 -- # return 0 00:25:57.602 09:18:33 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:57.602 09:18:33 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:57.602 09:18:33 -- common/autotest_common.sh@10 -- # set +x 00:25:57.602 09:18:34 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:57.602 09:18:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:57.602 09:18:34 -- common/autotest_common.sh@10 -- # set +x 00:25:57.602 09:18:34 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:57.602 09:18:34 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:57.602 09:18:34 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:57.602 09:18:34 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:57.602 09:18:34 -- spdk/autotest.sh@394 -- # hostname 00:25:57.602 09:18:34 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:57.926 geninfo: WARNING: invalid characters removed from testname! 
00:26:24.482 09:18:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:27.014 09:19:03 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:29.560 09:19:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:32.087 09:19:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:35.364 09:19:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:37.261 09:19:13 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:39.788 09:19:16 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:39.788 09:19:16 -- spdk/autorun.sh@1 -- $ timing_finish 00:26:39.788 09:19:16 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:26:39.788 09:19:16 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:39.788 09:19:16 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:39.788 09:19:16 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:40.047 + [[ -n 5205 ]] 00:26:40.047 + sudo kill 5205 00:26:40.056 [Pipeline] } 00:26:40.072 [Pipeline] // timeout 00:26:40.079 [Pipeline] } 00:26:40.093 [Pipeline] // stage 00:26:40.098 [Pipeline] } 00:26:40.112 [Pipeline] // catchError 00:26:40.121 [Pipeline] stage 00:26:40.123 [Pipeline] { (Stop VM) 00:26:40.136 [Pipeline] sh 00:26:40.412 + vagrant halt 00:26:43.696 ==> default: Halting domain... 00:26:48.973 [Pipeline] sh 00:26:49.295 + vagrant destroy -f 00:26:52.597 ==> default: Removing domain... 
00:26:52.866 [Pipeline] sh 00:26:53.144 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:26:53.153 [Pipeline] } 00:26:53.168 [Pipeline] // stage 00:26:53.176 [Pipeline] } 00:26:53.192 [Pipeline] // dir 00:26:53.197 [Pipeline] } 00:26:53.217 [Pipeline] // wrap 00:26:53.240 [Pipeline] } 00:26:53.292 [Pipeline] // catchError 00:26:53.298 [Pipeline] stage 00:26:53.300 [Pipeline] { (Epilogue) 00:26:53.307 [Pipeline] sh 00:26:53.578 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:00.178 [Pipeline] catchError 00:27:00.180 [Pipeline] { 00:27:00.192 [Pipeline] sh 00:27:00.472 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:00.472 Artifacts sizes are good 00:27:00.480 [Pipeline] } 00:27:00.494 [Pipeline] // catchError 00:27:00.506 [Pipeline] archiveArtifacts 00:27:00.513 Archiving artifacts 00:27:00.636 [Pipeline] cleanWs 00:27:00.650 [WS-CLEANUP] Deleting project workspace... 00:27:00.650 [WS-CLEANUP] Deferred wipeout is used... 00:27:00.668 [WS-CLEANUP] done 00:27:00.670 [Pipeline] } 00:27:00.685 [Pipeline] // stage 00:27:00.690 [Pipeline] } 00:27:00.703 [Pipeline] // node 00:27:00.709 [Pipeline] End of Pipeline 00:27:00.747 Finished: SUCCESS